Pattern Classification: A Unified View of Statistical and Neural Approaches

by Jürgen Schürmann
  • ISBN13: 9780471135340
  • ISBN10: 0471135348

  • Edition: 1st
  • Format: Hardcover
  • Copyright: 1996-03-15
  • Publisher: Wiley-Interscience

Summary

The product of years of research and practical experience in pattern classification, this book offers a theory-based engineering perspective on neural networks and statistical pattern classification. Pattern Classification sheds new light on the relationships between seemingly unrelated approaches to pattern recognition, including statistical methods, polynomial regression, multilayer perceptrons, and radial basis functions. Important topics such as feature selection, reject criteria, classifier performance measurement, and classifier combination are fully covered, as is material on techniques that until now would have required an extensive literature search to locate. A full program of illustrations, graphs, and examples helps make the operations and general properties of the different classification approaches intuitively understandable. Offering a lucid presentation of complex applications and their algorithms, Pattern Classification is an invaluable resource for researchers, engineers, and graduate students in this rapidly developing field.
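
To make the unified view concrete: the least mean-square machinery behind the book's polynomial regression classifier (developed in the "Polynomial Regression" chapter) can be sketched in a few lines. The following Python fragment is an illustration under stated assumptions, not code from the book or its STATMOD package; the toy data, variable names, and the use of numpy's least-squares solver are all ours.

    import numpy as np

    # Illustrative sketch (not from the book): a first-degree polynomial
    # regression classifier trained by least mean-square functional
    # approximation on a synthetic two-class problem.
    rng = np.random.default_rng(0)
    n = 200
    v0 = rng.normal([0.0, 0.0], 0.5, size=(n, 2))  # class 1 samples
    v1 = rng.normal([1.5, 1.0], 0.5, size=(n, 2))  # class 2 samples
    V = np.vstack([v0, v1])

    # Hard labeling: one-of-K target vectors y for each training pattern.
    Y = np.zeros((2 * n, 2))
    Y[:n, 0] = 1.0
    Y[n:, 1] = 1.0

    # Augmented measurement vector x = [1, v1, v2] (first-degree polynomial).
    X = np.hstack([np.ones((2 * n, 1)), V])

    # Minimize E[|y - A^T x|^2]; equivalent to solving M A = E[x y^T] with
    # moment matrix M = E[x x^T]. Least squares is the numerically safer route.
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)

    # The discriminant vector d(v) = A^T x approximates the posterior
    # probabilities prob(k|v); decide for the class with the largest component.
    pred = (X @ A).argmax(axis=1)
    print("reclassification error:", (pred != Y.argmax(axis=1)).mean())

On this toy problem the first-degree discriminant already separates the two Gaussian clouds; the book's point is that the same least mean-square criterion, applied with richer basis functions, yields higher-degree polynomial classifiers, multilayer perceptrons, and radial basis function networks as variations on one theme.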

Author Biography

JÜRGEN SCHÜRMANN is head of the Pattern Understanding Group at the Daimler-Benz Research Center in Ulm, Germany, and teaches at the Technical University of Darmstadt.

Table of Contents

Preface xv
Introduction 1(18)
Recognition of Entities versus Model-Based Recognition 4(2)
Statistical Approach 6(2)
Neural Network Approach 8(3)
Conceptual Framework 11(8)
Classes 11(2)
Measurements and Features 13(2)
Observation Vector 15(1)
Pattern Classification System 15(2)
Learning 17(2)
Statistical Decision Theory 19(12)
The Pattern Source 20(1)
Risk Minimization 21(3)
Bayes Classifier 24(4)
General Structure of Pattern Classifier 28(3)
Need for Approximations: Fundamental Approaches 31(13)
Least Mean-Square Functional Approximations 33(7)
Hard and Soft Labeling 34(2)
Regression Function 36(2)
Comment 38(1)
Relations between Minimum Distance and Maximum A Posteriori Decisions 39(1)
Need for Approximations 39(1)
Statistical Modeling of Class-Specific Distributions 40(2)
Minimum-Distance Classification 42(2)
Classification Based on Statistical Models Determined by First- and Second-Order Statistical Moments 44(53)
Multidimensional Normal Distribution 45(8)
Definiteness of Quadratic Form 47(3)
Equidensity Ellipsoids 50(1)
Orientation and Size of Ellipsoids 50(2)
Some Further Properties of Normal Distribution 52(1)
Bayes Classifier for Normally Distributed Classes 53(14)
Example 55(3)
Confidences 58(2)
Quadratic Discriminant Functions 60(1)
Class Regions and Borders 61(3)
Forwarding Sets of Alternatives 64(1)
Scatterplots in Measurement and in Decision Space 65(2)
Simplification to Equal Covariance Matrices 67(5)
White Covariance Matrix 69(1)
Class Regions and Borders 69(3)
Euclidean and Mahalanobis Distance Classifiers 72(3)
Comments 74(1)
Statistically Independent Binary Measurements 75(4)
Impact of Variances of Binary Variables 77(2)
Parameter Estimation 79(10)
Statistical Moments 80(2)
Augmented Measurement Vector 82(1)
Parameter Estimation from Subsets 82(2)
Visualization of Statistical Parameters 84(3)
Interpretation 87(2)
Recursive Parameter Estimation 89(8)
Recursive Learning of Means 90(2)
Recursive Learning of Moment Matrices 92(1)
Recursive Learning of Inverse Covariance Matrix 93(2)
Comments 95(2)
Classification Based on Mean-Square Functional Approximations 97(5)
Polynomial Regression 102(85)
Adaptation of Coefficient Matrix 107(3)
Comments 109(1)
Properties of Solution 110(3)
Residual Variance 110(1)
Orthogonality of Estimation Error Δd 110(1)
Unbiasedness of Estimation d(v) 111(1)
Unity Sum of Components of d(v) 111(2)
Functional Approximation with y versus p as Target Vectors 113(3)
Comments 115(1)
Mean and Covariance of Estimation Error 116(6)
Class-Specific Means of Estimation Error 118(1)
Row and Column Sums of Estimation Error Covariance Matrix 119(1)
Error Bounds on Class-Specific Estimations 119(2)
Comments 121(1)
Covariance of Estimation Error and Residual Variance 122(1)
Covariance and Class-Specific Means of Discriminant Vector d 122(2)
Some Properties of Mapping V → D from Measurement Space into Decision Space 124(3)
Training Polynomial Classifier: Solving Matrix Equation 127(5)
Linear Dependencies 128(1)
Predictor and To-be-Predicted Variables 129(1)
Solving Matrix Equation 130(2)
Feature Selection and Pivot Strategies 132(8)
Pivot Selection 133(4)
Maximum Linear Independence (LU) Strategy 137(1)
Minimum Residual Variance (MS) Strategy 138(2)
Sequence of Intermediate Solutions and Feature Ranking 140(1)
Establishing Moment Matrix M from Learning Set or from Statistical Models 140(2)
Regularities of Moment Matrices for Complete Polynomials 141(1)
Second-Degree Polynomial Classifier for Normally Distributed Classes 142(3)
Deriving Up to Fourth-Order Moments for Normally Distributed Classes 143(2)
Visualizations Based on Two-Dimensional Example of Chapter 4 145(6)
Confidence Mapping 151(14)
Motivation 152(2)
Imperfect Estimations Used as Feature Variables for Subsequent Classifier 154(4)
Computing Confidences from Eigen- and Fremd-Histograms 158(2)
Smooth Approximations for Confidence Mapping Function 160(1)
Generic Model for Confidence Mapping Function 161(1)
Confidence Mapping Applied to Examples of Section 6.12 162(1)
Comments 163(2)
Recursive Learning 165(18)
Recursive Learning Rule for Polynomial Classifier 173(1)
Modifications by Use of Simplified Weight Matrices G 174(1)
Why Changes in Weight Matrix G Do Not Disturb Convergence 175(1)
Stability Limits for Learning Factor α 176(2)
Role of Last Seen Sample in Recursive Learning 178(1)
Visualizations Based on Example of Section 4 179(1)
Comments 180(3)
Classifier Iteration 183(4)
Comments 186(1)
Multilayer Perceptron Regression 187(44)
Model Neuron 188(3)
Sigmoidal Activation Function 191(2)
Single Layer of Multilayer Perceptron 193(1)
Multilayer Perceptron 194(3)
Relations to Concept of Functional Approximation Based on Linear Combination of Basis Functions 196(1)
Appearance of Perceptron Basis Functions 197(5)
Linear Combination of Perceptron Basis Functions 198(3)
Perceptron Basis Functions in Multilayer Case 201(1)
Comments 202(1)
Backpropagation Learning 202(8)
Computing Gradient 204(1)
Partial Derivatives for hth Layer 205(1)
Partial Derivatives with Respect to Output Variables 206(1)
Singular and Cumulative Learning 207(2)
Comments 209(1)
Visualizations Based on Two-Dimensional Example of Chapter 4 210(5)
Constructive Design of Perceptron Basis Functions 215(8)
Motivation 217(1)
Pairwise Borders between Classes 218(3)
Visualization 221(2)
Properties of Multilayer Perceptron Regression 223(4)
Modifications of Multilayer Perceptron 227(4)
Incomplete Networks 227(1)
Weight Sharing 228(1)
Modified Optimization Criteria 229(1)
Comment 230(1)
Radial Basis Functions 231(22)
Relations to Nearest-Neighbor and Restricted-Neighborhood Techniques 233(9)
Euclidean One-Nearest-Neighbor Classifier 234(3)
Comments 237(1)
Restricted-Neighborhood Classifier 238(3)
Euclidean k-Nearest-Neighbor Classifier 241(1)
Clustering 242(6)
Vector Quantization Approach 244(4)
Radial Basis Function Approximations to prob(v|k) 248(2)
Radial Basis Function Approximations to prob(k|v) 250(3)
Outlook 251(2)
Measurements, Features, and Feature Selection 253(36)
Evaluation of Features Individually 256(6)
Mutual Information 257(1)
Correlation 258(2)
Relations to Minimum Residual Variance Strategy of Section 6.9 260(1)
Comments 261(1)
Rank-Order-Based Feature Selection 262(2)
Collective Evaluation of Feature Sets 264(6)
Relations to Class-Specific, Pooled, and Common Covariance Matrices 266(1)
Interpretations 267(2)
Relations to Minimum-Distance Classification 269(1)
Fisher Criterion 269(1)
Principal-Axis Transform and Its Neural Counterpart 270(19)
Translation and Rotation of Coordinate System 271(3)
Projections Based on New Coordinate System 274(1)
Flat-Galaxy Interpretation 275(2)
Whitening Transformation 277(1)
Two- and Three-Dimensional Views of High-Dimensional Spaces 277(3)
Reconstruction versus Classification 280(3)
Reduction to Basic Constituents 283(1)
Neural Principal-Axis Transform 284(5)
Reject Criteria and Classifier Performance 289(41)
Garbage Pattern Problem 290(3)
Reject Criteria 293(5)
Ambiguous Patterns and Sets of Alternatives 298(3)
Relations between Reject and Forming Sets of Alternatives 301(1)
Controlling Number of Alternatives 301(6)
Introduction to Multiple-Class Target Points in Decision Space 301(4)
Weighing Out Confidences 305(2)
Performance Measuring and Operating Characteristics 307(23)
Reclassification and Generalization 308(1)
Learning and Test Set Sizes 309(1)
Statistical Measures Describing Deterministic System 310(1)
Jackknifing and Leave One Out 310(1)
Variability of Error Rate Measurements 311(3)
Error and Reject Rates Depending on Reject Threshold 314(7)
Optimizing Reject Threshold 321(3)
Residual Error Rate and Mean Number of Alternatives Depending on Confidence Threshold 324(6)
Combining Classifiers 330(27)
Concatenating Classifiers 331(3)
Classifier Tuning 333(1)
Classifiers Working in Parallel 334(7)
Combination of Fast Low-Performance with Slow High-Performance Classifiers 334(1)
Voting 334(2)
Classifier Combination Following Dempster's Rule 336(2)
Combining Classifiers as Classification Task 338(3)
Hierarchical Classifiers 341(9)
Operations of Individual Node Classifier 342(2)
Confidence Distribution Network 344(1)
Pruning Classifier Tree 345(1)
Design of Tree Structure 346(2)
Adaptation of Node Classifiers 348(1)
Comments 349(1)
Classifier Networks 350(7)
Combination of Pairwise Estimations Viewed as Combination of Experts' Votes 352(1)
Combination Network 353(2)
Special Advantages 355(1)
Relations to Multilayer Perceptron 356(1)
Conclusion 357(3)
STATMOD Program: Description of ftp Package 360(4)
References 364(5)
Index 369
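
One recurring practical device in the reject-criteria and performance chapters listed above is the reject threshold: accept the classifier's decision only when its top confidence exceeds a threshold, and sweep that threshold to trade residual error rate against reject rate. Here is a minimal Python sketch of that bookkeeping, not from the book, with synthetic confidences standing in for a real classifier's outputs.

    import numpy as np

    # Hedged sketch of the reject-threshold idea: treat the classifier's top
    # confidence as a reject variable and sweep a threshold to trade error
    # rate against reject rate. Confidences and correctness are synthetic.
    rng = np.random.default_rng(1)
    conf = rng.uniform(0.4, 1.0, size=1000)       # top-class confidence per pattern
    correct = rng.random(1000) < conf             # correctness correlated with confidence

    for t in (0.5, 0.7, 0.9):
        accepted = conf >= t
        reject_rate = 1.0 - accepted.mean()
        error_rate = (~correct & accepted).mean() # errors among all patterns
        print(f"threshold {t:.1f}: reject {reject_rate:.2%}, residual error {error_rate:.2%}")

Plotting residual error rate against reject rate over many such thresholds yields the kind of operating characteristic the book uses to measure and compare classifier performance.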
