Pattern Recognition Algorithms for Data Mining

by Pal, Sankar K.; Mitra, Pabitra
  • ISBN13: 9781584884576
  • ISBN10: 1584884576

  • Format: Hardcover
  • Copyright: 2004-05-27
  • Publisher: Chapman & Hall/CRC

Note: Supplemental materials are not guaranteed with Rental or Used book purchases.

Purchase Benefits

  • Free Shipping On Orders Over $35!
    Your order must be $35 or more to qualify for free economy shipping. Bulk sales, POs, Marketplace items, eBooks, and apparel do not qualify for this offer.
  • Get Rewarded for Ordering Your Textbooks!

List Price: $140.00 (save up to $85.22)

  • Rent Book: $94.50 (free shipping)

    Usually ships in 3-5 business days.
    *This item is part of an exclusive publisher rental program and requires an additional convenience fee. This fee will be reflected in the shopping cart.

How To: Textbook Rental

Looking to rent a book? Rent Pattern Recognition Algorithms for Data Mining [ISBN: 9781584884576] for the semester, quarter, or short term, or search our site for other textbooks by Sankar K. Pal. Renting a textbook can save you up to 90% off the cost of buying.

Summary

Pattern Recognition Algorithms for Data Mining addresses different pattern recognition (PR) tasks in a unified framework with both theoretical and experimental results. Tasks covered include data condensation, feature selection, case generation, clustering/classification, and rule generation and evaluation. This volume presents various theories, methodologies, and algorithms, using both classical approaches and hybrid paradigms. The authors emphasize large datasets with overlapping, intractable, or nonlinear boundary classes, and datasets that demonstrate granular computing in soft frameworks.

Organized into eight chapters, the book begins with an introduction to PR, data mining, and knowledge discovery concepts. The authors analyze the tasks of multiscale data condensation and dimensionality reduction, then explore the problem of learning with support vector machines (SVMs). They conclude by highlighting the significance of granular computing for different mining tasks in a soft paradigm.
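
For readers new to these topics, the condensed nearest neighbor (CNN) rule reviewed in Section 2.2.1 gives the flavor of data condensation: keep only those training points that a 1-NN classifier needs in order to label the full training set correctly. The sketch below is a minimal, generic Python rendering of that classical baseline (an illustration only, not the multiscale algorithm the book develops):

    # Minimal sketch of Hart's condensed nearest neighbor (CNN) rule,
    # a classical data condensation baseline reviewed in Chapter 2.
    import numpy as np

    def condense(X, y):
        """Return indices of a subset of (X, y) with which a 1-NN
        classifier still labels every training point correctly."""
        keep = [0]                      # seed the store with one point
        changed = True
        while changed:                  # repeat until a pass adds nothing
            changed = False
            for i in range(len(X)):
                # classify point i by its nearest neighbor in the store
                d = np.linalg.norm(X[keep] - X[i], axis=1)
                nearest = keep[int(np.argmin(d))]
                if y[nearest] != y[i]:  # misclassified: add it to the store
                    keep.append(i)
                    changed = True
        return np.array(keep)

    # Toy usage: two well-separated Gaussian blobs condense to a few points.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    print("kept", len(condense(X, y)), "of", len(X), "points")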

Author Biography

Sankar K. Pal is a Professor and Distinguished Scientist at the Indian Statistical Institute, Calcutta, where he founded the Machine Intelligence Unit.

Pabitra Mitra is an Assistant Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology, Kanpur.

Table of Contents

Foreword xiii
Preface xxi
List of Tables xxv
List of Figures xxvii
1 Introduction 1(28)
1.1 Introduction 1(2)
1.2 Pattern Recognition in Brief 3(4)
1.2.1 Data acquisition 4(1)
1.2.2 Feature selection/extraction 4(1)
1.2.3 Classification 5(2)
1.3 Knowledge Discovery in Databases (KDD) 7(3)
1.4 Data Mining 10(4)
1.4.1 Data mining tasks 10(2)
1.4.2 Data mining tools 12(1)
1.4.3 Applications of data mining 12(2)
1.5 Different Perspectives of Data Mining 14(3)
1.5.1 Database perspective 14(1)
1.5.2 Statistical perspective 15(1)
1.5.3 Pattern recognition perspective 15(1)
1.5.4 Research issues and challenges 16(1)
1.6 Scaling Pattern Recognition Algorithms to Large Data Sets 17(4)
1.6.1 Data reduction 17(1)
1.6.2 Dimensionality reduction 18(1)
1.6.3 Active learning 19(1)
1.6.4 Data partitioning 19(1)
1.6.5 Granular computing 20(1)
1.6.6 Efficient search algorithms 20(1)
1.7 Significance of Soft Computing in KDD 21(1)
1.8 Scope of the Book 22(7)
2 Multiscale Data Condensation 29(30)
2.1 Introduction 29(3)
2.2 Data Condensation Algorithms 32(2)
2.2.1 Condensed nearest neighbor rule 32(1)
2.2.2 Learning vector quantization 33(1)
2.2.3 Astrahan's density-based method 34(1)
2.3 Multiscale Representation of Data 34(3)
2.4 Nearest Neighbor Density Estimate 37(1)
2.5 Multiscale Data Condensation Algorithm 38(2)
2.6 Experimental Results and Comparisons 40(12)
2.6.1 Density estimation 41(1)
2.6.2 Test of statistical significance 41(6)
2.6.3 Classification: Forest cover data 47(1)
2.6.4 Clustering: Satellite image data 48(1)
2.6.5 Rule generation: Census data 49(3)
2.6.6 Study on scalability 52(1)
2.6.7 Choice of scale parameter 52(1)
2.7 Summary 52(7)
3 Unsupervised Feature Selection 59(24)
3.1 Introduction 59(1)
3.2 Feature Extraction 60(2)
3.3 Feature Selection 62(2)
3.3.1 Filter approach 63(1)
3.3.2 Wrapper approach 64(1)
3.4 Feature Selection Using Feature Similarity (FSFS) 64(7)
3.4.1 Feature similarity measures 65(3)
3.4.2 Feature selection through clustering 68(3)
3.5 Feature Evaluation Indices 71(3)
3.5.1 Supervised indices 71(1)
3.5.2 Unsupervised indices 72(1)
3.5.3 Representation entropy 73(1)
3.6 Experimental Results and Comparisons 74(8)
3.6.1 Comparison: Classification and clustering performance 74(5)
3.6.2 Redundancy reduction: Quantitative study 79(1)
3.6.3 Effect of cluster size 80(2)
3.7 Summary 82(1)
4 Active Learning Using Support Vector Machine 83(20)
4.1 Introduction 83(3)
4.2 Support Vector Machine 86(2)
4.3 Incremental Support Vector Learning with Multiple Points 88(1)
4.4 Statistical Query Model of Learning 89(2)
4.4.1 Query strategy 90(1)
4.4.2 Confidence factor of support vector set 90(1)
4.5 Learning Support Vectors with Statistical Queries 91(3)
4.6 Experimental Results and Comparison 94(7)
4.6.1 Classification accuracy and training time 94(3)
4.6.2 Effectiveness of the confidence factor 97(1)
4.6.3 Margin distribution 97(4)
4.7 Summary 101(2)
5 Rough-fuzzy Case Generation 103(20)
5.1 Introduction 103(2)
5.2 Soft Granular Computing 105(1)
5.3 Rough Sets 106(5)
5.3.1 Information systems 107(1)
5.3.2 Indiscernibility and set approximation 107(1)
5.3.3 Reducts 108(2)
5.3.4 Dependency rule generation 110(1)
5.4 Linguistic Representation of Patterns and Fuzzy Granulation 111(3)
5.5 Rough-fuzzy Case Generation Methodology 114(6)
5.5.1 Thresholding and rule generation 115(2)
5.5.2 Mapping dependency rules to cases 117(1)
5.5.3 Case retrieval 118(2)
5.6 Experimental Results and Comparison 120(1)
5.7 Summary 121(2)
6 Rough-fuzzy Clustering 123(26)
6.1 Introduction 123(1)
6.2 Clustering Methodologies 124(2)
6.3 Algorithms for Clustering Large Data Sets 126(3)
6.3.1 CLARANS: Clustering large applications based upon randomized search 126(1)
6.3.2 BIRCH: Balanced iterative reducing and clustering using hierarchies 126(1)
6.3.3 DBSCAN: Density-based spatial clustering of applications with noise 127(1)
6.3.4 STING: Statistical information grid 128(1)
6.4 CEMMiSTRI: Clustering using EM, Minimal Spanning Tree and Rough-fuzzy Initialization 129(6)
6.4.1 Mixture model estimation via EM algorithm 130(1)
6.4.2 Rough set initialization of mixture parameters 131(1)
6.4.3 Mapping reducts to mixture parameters 132(1)
6.4.4 Graph-theoretic clustering of Gaussian components 133(2)
6.5 Experimental Results and Comparison 135(4)
6.6 Multispectral Image Segmentation 139(8)
6.6.1 Discretization of image bands 141(1)
6.6.2 Integration of EM, MST and rough sets 141(1)
6.6.3 Index for segmentation quality 141(1)
6.6.4 Experimental results and comparison 141(6)
6.7 Summary 147(2)
7 Rough Self-Organizing Map 149(16)
7.1 Introduction 149(1)
7.2 Self-Organizing Maps (SOM) 150(2)
7.2.1 Learning 151(1)
7.2.2 Effect of neighborhood 152(1)
7.3 Incorporation of Rough Sets in SOM (RSOM) 152(2)
7.3.1 Unsupervised rough set rule generation 153(1)
7.3.2 Mapping rough set rules to network weights 153(1)
7.4 Rule Generation and Evaluation 154(2)
7.4.1 Extraction methodology 154(1)
7.4.2 Evaluation indices 155(1)
7.5 Experimental Results and Comparison 156(7)
7.5.1 Clustering and quantization error 157(5)
7.5.2 Performance of rules 162(1)
7.6 Summary 163(2)
8 Classification, Rule Generation and Evaluation using Modular Rough-fuzzy MLP 165(36)
8.1 Introduction 165(2)
8.2 Ensemble Classifiers 167(3)
8.3 Association Rules 170(3)
8.3.1 Rule generation algorithms 170(3)
8.3.2 Rule interestingness 173(1)
8.4 Classification Rules 173(2)
8.5 Rough-fuzzy MLP 175(3)
8.5.1 Fuzzy MLP 175(1)
8.5.2 Rough set knowledge encoding 176(2)
8.6 Modular Evolution of Rough-fuzzy MLP 178(6)
8.6.1 Algorithm 178(4)
8.6.2 Evolutionary design 182(2)
8.7 Rule Extraction and Quantitative Evaluation 184(5)
8.7.1 Rule extraction methodology 184(4)
8.7.2 Quantitative measures 188(1)
8.8 Experimental Results and Comparison 189(10)
8.8.1 Classification 190(2)
8.8.2 Rule extraction 192(7)
8.9 Summary 199(2)
A Role of Soft-Computing Tools in KDD 201(10)
A.1 Fuzzy Sets 201(5)
A.1.1 Clustering 202(1)
A.1.2 Association rules 203(1)
A.1.3 Functional dependencies 204(1)
A.1.4 Data summarization 204(1)
A.1.5 Web application 205(1)
A.1.6 Image retrieval 205(1)
A.2 Neural Networks 206(1)
A.2.1 Rule extraction 206(1)
A.2.2 Clustering and self organization 206(1)
A.2.3 Regression 207(1)
A.3 Neuro-fuzzy Computing 207(1)
A.4 Genetic Algorithms 208(1)
A.5 Rough Sets 209(1)
A.6 Other Hybridizations 210(1)
B Data Sets Used in Experiments 211(4)
References 215(22)
Index 237(6)
About the Authors 243

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.
