What is included with this book?
A new edition of an introductory text in machine learning that gives a unified treatment of machine learning problems and solutions.
The goal of machine learning is to program computers to use example data or past experience to solve a given problem. Many successful applications of machine learning exist already, including systems that analyze past sales data to predict customer behavior, optimize robot behavior so that a task can be completed using minimum resources, and extract knowledge from bioinformatics data.
The second edition of Introduction to Machine Learning is a comprehensive textbook on the subject, covering a broad array of topics not usually included in introductory machine learning texts. In order to present a unified treatment of machine learning problems and solutions, it discusses many methods from different fields, including statistics, pattern recognition, neural networks, artificial intelligence, signal processing, control, and data mining. All learning algorithms are explained so that the student can easily move from the equations in the book to a computer program.
The text covers such topics as supervised learning, Bayesian decision theory, parametric methods, multivariate methods, multilayer perceptrons, local models, hidden Markov models, assessing and comparing classification algorithms, and reinforcement learning.
New to the second edition are chapters on kernel machines, graphical models, and Bayesian estimation; expanded coverage of statistical tests in a chapter on design and analysis of machine learning experiments; case studies available on the Web (with downloadable results for instructors); and many additional exercises.
All chapters have been revised and updated. Introduction to Machine Learning can be used by advanced undergraduates and graduate students who have completed courses in computer programming, probability, calculus, and linear algebra. It will also be of interest to engineers in the field who are concerned with the application of machine learning methods.
"This volume offers a very accessible introduction to the field of machine learning. Ethem Alpaydin gives a comprehensive exposition of the kinds of modeling and prediction problems addressed by machine learning, as well as an overview of the most common families of paradigms, algorithms, and techniques in the field. The volume will be particularly useful to the newcomer eager to quickly get a grasp of the elements that compose this relatively new and rapidly evolving field."
Joaquin Quiñonero-Candela, co-editor, Dataset Shift in Machine Learning
Series Foreword | p. xvii |
Figures | p. xix |
Tables | p. xxix |
Preface | p. xxxi |
Acknowledgments | p. xxxiii |
Notes for the Second Edition | p. xxxv |
Notations | p. xxxix |
Introduction | p. 1 |
What Is Machine Learning? | p. 1 |
Examples of Machine Learning Applications | p. 4 |
Learning Associations | p. 4 |
Classification | p. 5 |
Regression | p. 9 |
Unsupervised Learning | p. 11 |
Reinforcement Learning | p. 13 |
Notes | p. 14 |
Relevant Resources | p. 16 |
Exercises | p. 18 |
References | p. 19 |
Supervised Learning | p. 21 |
Learning a Class from Examples | p. 21 |
Vapnik-Chervonenkis (VC) Dimension | p. 27 |
Probably Approximately Correct (PAC) Learning | p. 29 |
Noise | p. 30 |
Learning Multiple Classes | p. 32 |
Regression | p. 34 |
Model Selection and Generalization | p. 37 |
Dimensions of a Supervised Machine Learning Algorithm | p. 41 |
Notes | p. 42 |
Exercises | p. 43 |
References | p. 44 |
Bayesian Decision Theory | p. 47 |
Introduction | p. 47 |
Classification | p. 49 |
Losses and Risks | p. 51 |
Discriminant Functions | p. 53 |
Utility Theory | p. 54 |
Association Rules | p. 55 |
Notes | p. 58 |
Exercises | p. 58 |
References | p. 59 |
Parametric Methods | p. 61 |
Introduction | p. 61 |
Maximum Likelihood Estimation | p. 62 |
Bernoulli Density | p. 63 |
Multinomial Density | p. 64 |
Gaussian (Normal) Density | p. 64 |
Evaluating an Estimator: Bias and Variance | p. 65 |
The Bayes' Estimator | p. 66 |
Parametric Classification | p. 69 |
Regression | p. 73 |
Tuning Model Complexity: Bias/Variance Dilemma | p. 76 |
Model Selection Procedures | p. 80 |
Notes | p. 84 |
Exercises | p. 84 |
References | p. 85 |
Multivariate Methods | p. 87 |
Multivariate Data | p. 87 |
Parameter Estimation | p. 88 |
Estimation of Missing Values | p. 89 |
Multivariate Normal Distribution | p. 90 |
Multivariate Classification | p. 94 |
Tuning Complexity | p. 99 |
Discrete Features | p. 102 |
Multivariate Regression | p. 103 |
Notes | p. 105 |
Exercises | p. 106 |
References | p. 107 |
Dimensionality Reduction | p. 109 |
Introduction | p. 109 |
Subset Selection | p. 110 |
Principal Components Analysis | p. 113 |
Factor Analysis | p. 120 |
Multidimensional Scaling | p. 125 |
Linear Discriminant Analysis | p. 128 |
Isomap | p. 133 |
Locally Linear Embedding | p. 135 |
Notes | p. 138 |
Exercises | p. 139 |
References | p. 140 |
Clustering | p. 143 |
Introduction | p. 143 |
Mixture Densities | p. 144 |
k-Means Clustering | p. 145 |
Expectation-Maximization Algorithm | p. 149 |
Mixtures of Latent Variable Models | p. 154 |
Supervised Learning after Clustering | p. 155 |
Hierarchical Clustering | p. 157 |
Choosing the Number of Clusters | p. 158 |
Notes | p. 160 |
Exercises | p. 160 |
References | p. 161 |
Nonparametric Methods | p. 163 |
Introduction | p. 163 |
Nonparametric Density Estimation | p. 165 |
Histogram Estimator | p. 165 |
Kernel Estimator | p. 167 |
k-Nearest Neighbor Estimator | p. 168 |
Generalization to Multivariate Data | p. 170 |
Nonparametric Classification | p. 171 |
Condensed Nearest Neighbor | p. 172 |
Nonparametric Regression: Smoothing Models | p. 174 |
Running Mean Smoother | p. 175 |
Kernel Smoother | p. 176 |
Running Line Smoother | p. 177 |
How to Choose the Smoothing Parameter | p. 178 |
Notes | p. 180 |
Exercises | p. 181 |
References | p. 182 |
Decision Trees | p. 185 |
Introduction | p. 185 |
Univariate Trees | p. 187 |
Classification Trees | p. 188 |
Regression Trees | p. 192 |
Pruning | p. 194 |
Rule Extraction from Trees | p. 197 |
Learning Rules from Data | p. 198 |
Multivariate Trees | p. 202 |
Notes | p. 204 |
Exercises | p. 207 |
References | p. 207 |
Linear Discrimination | p. 209 |
Introduction | p. 209 |
Generalizing the Linear Model | p. 211 |
Geometry of the Linear Discriminant | p. 212 |
Two Classes | p. 212 |
Multiple Classes | p. 214 |
Pairwise Separation | p. 216 |
Parametric Discrimination Revisited | p. 217 |
Gradient Descent | p. 218 |
Logistic Discrimination | p. 220 |
Two Classes | p. 220 |
Multiple Classes | p. 224 |
Discrimination by Regression | p. 228 |
Notes | p. 230 |
Exercises | p. 230 |
References | p. 231 |
Multilayer Perceptrons | p. 233 |
Introduction | p. 233 |
Understanding the Brain | p. 234 |
Neural Networks as a Paradigm for Parallel Processing | p. 235 |
The Perceptron | p. 237 |
Training a Perceptron | p. 240 |
Learning Boolean Functions | p. 243 |
Multilayer Perceptrons | p. 245 |
MLP as a Universal Approximator | p. 248 |
Backpropagation Algorithm | p. 249 |
Nonlinear Regression | p. 250 |
Two-Class Discrimination | p. 252 |
Multiclass Discrimination | p. 254 |
Multiple Hidden Layers | p. 256 |
Training Procedures | p. 256 |
Improving Convergence | p. 256 |
Overtraining | p. 257 |
Structuring the Network | p. 258 |
Hints | p. 261 |
Tuning the Network Size | p. 263 |
Bayesian View of Learning | p. 266 |
Dimensionality Reduction | p. 267 |
Learning Time | p. 270 |
Time Delay Neural Networks | p. 270 |
Recurrent Networks | p. 271 |
Notes | p. 272 |
Exercises | p. 274 |
References | p. 275 |
Local Models | p. 279 |
Introduction | p. 279 |
Competitive Learning | p. 280 |
Online k-Means | p. 280 |
Adaptive Resonance Theory | p. 285 |
Self-Organizing Maps | p. 286 |
Radial Basis Functions | p. 288 |
Incorporating Rule-Based Knowledge | p. 294 |
Normalized Basis Functions | p. 295 |
Competitive Basis Functions | p. 297 |
Learning Vector Quantization | p. 300 |
Mixture of Experts | p. 300 |
Cooperative Experts | p. 303 |
Competitive Experts | p. 304 |
Hierarchical Mixture of Experts | p. 304 |
Notes | p. 305 |
Exercises | p. 306 |
References | p. 307 |
Kernel Machines | p. 309 |
Introduction | p. 309 |
Optimal Separating Hyperplane | p. 311 |
The Nonseparable Case: Soft Margin Hyperplane | p. 315 |
ν-SVM | p. 318 |
Kernel Trick | p. 319 |
Vectorial Kernels | p. 321 |
Defining Kernels | p. 324 |
Multiple Kernel Learning | p. 325 |
Multiclass Kernel Machines | p. 327 |
Kernel Machines for Regression | p. 328 |
One-Class Kernel Machines | p. 333 |
Kernel Dimensionality Reduction | p. 335 |
Notes | p. 337 |
Exercises | p. 338 |
References | p. 339 |
Bayesian Estimation | p. 341 |
Introduction | p. 341 |
Estimating the Parameter of a Distribution | p. 343 |
Discrete Variables | p. 343 |
Continuous Variables | p. 345 |
Bayesian Estimation of the Parameters of a Function | p. 348 |
Regression | p. 348 |
The Use of Basis/Kernel Functions | p. 352 |
Bayesian Classification | p. 353 |
Gaussian Processes | p. 356 |
Notes | p. 359 |
Exercises | p. 360 |
References | p. 361 |
Hidden Markov Models | p. 363 |
Introduction | p. 363 |
Discrete Markov Processes | p. 364 |
Hidden Markov Models | p. 367 |
Three Basic Problems of HMMs | p. 369 |
Evaluation Problem | p. 369 |
Finding the State Sequence | p. 373 |
Learning Model Parameters | p. 375 |
Continuous Observations | p. 378 |
The HMM with Input | p. 379 |
Model Selection in HMM | p. 380 |
Notes | p. 382 |
Exercises | p. 383 |
References | p. 384 |
Graphical Models | p. 387 |
Introduction | p. 387 |
Canonical Cases for Conditional Independence | p. 389 |
Example Graphical Models | p. 396 |
Naive Bayes' Classifier | p. 396 |
Hidden Markov Model | p. 398 |
Linear Regression | p. 401 |
d-Separation | p. 402 |
Belief Propagation | p. 402 |
Chains | p. 403 |
Trees | p. 405 |
Polytrees | p. 407 |
Junction Trees | p. 409 |
Undirected Graphs: Markov Random Fields | p. 410 |
Learning the Structure of a Graphical Model | p. 413 |
Influence Diagrams | p. 414 |
Notes | p. 414 |
Exercises | p. 417 |
References | p. 417 |
Combining Multiple Learners | p. 419 |
Rationale | p. 419 |
Generating Diverse Learners | p. 420 |
Model Combination Schemes | p. 423 |
Voting | p. 424 |
Error-Correcting Output Codes | p. 427 |
Bagging | p. 430 |
Boosting | p. 431 |
Mixture of Experts Revisited | p. 434 |
Stacked Generalization | p. 435 |
Fine-Tuning an Ensemble | p. 437 |
Cascading | p. 438 |
Notes | p. 440 |
Exercises | p. 442 |
References | p. 443 |
Reinforcement Learning | p. 447 |
Introduction | p. 447 |
Single State Case: K-Armed Bandit | p. 449 |
Elements of Reinforcement Learning | p. 450 |
Model-Based Learning | p. 453 |
Value Iteration | p. 453 |
Policy Iteration | p. 454 |
Temporal Difference Learning | p. 454 |
Exploration Strategies | p. 455 |
Deterministic Rewards and Actions | p. 456 |
Nondeterministic Rewards and Actions | p. 457 |
Eligibility Traces | p. 459 |
Generalization | p. 461 |
Partially Observable States | p. 464 |
The Setting | p. 464 |
Example: The Tiger Problem | p. 465 |
Notes | p. 470 |
Exercises | p. 472 |
References | p. 473 |
Design and Analysis of Machine Learning Experiments | p. 475 |
Introduction | p. 475 |
Factors, Response, and Strategy of Experimentation | p. 478 |
Response Surface Design | p. 481 |
Randomization, Replication, and Blocking | p. 482 |
Guidelines for Machine Learning Experiments | p. 483 |
Cross-Validation and Resampling Methods | p. 486 |
K-Fold Cross-Validation | p. 487 |
5×2 Cross-Validation | p. 488 |
Bootstrapping | p. 489 |
Measuring Classifier Performance | p. 489 |
Interval Estimation | p. 493 |
Hypothesis Testing | p. 496 |
Assessing a Classification Algorithm's Performance | p. 498 |
Binomial Test | p. 499 |
Approximate Normal Test | p. 500 |
t Test | p. 500 |
Comparing Two Classification Algorithms | p. 501 |
McNemar's Test | p. 501 |
K-Fold Cross-Validated Paired t Test | p. 501 |
5×2 cv Paired t Test | p. 502 |
5×2 cv Paired F Test | p. 503 |
Comparing Multiple Algorithms: Analysis of Variance | p. 504 |
Comparison over Multiple Datasets | p. 508 |
Comparing Two Algorithms | p. 509 |
Multiple Algorithms | p. 511 |
Notes | p. 512 |
Exercises | p. 513 |
References | p. 514 |
Probability | p. 517 |
Elements of Probability | p. 517 |
Axioms of Probability | p. 518 |
Conditional Probability | p. 518 |
Random Variables | p. 519 |
Probability Distribution and Density Functions | p. 519 |
Joint Distribution and Density Functions | p. 520 |
Conditional Distributions | p. 520 |
Bayes' Rule | p. 521 |
Expectation | p. 521 |
Variance | p. 522 |
Weak Law of Large Numbers | p. 523 |
Special Random Variables | p. 523 |
Bernoulli Distribution | p. 523 |
Binomial Distribution | p. 524 |
Multinomial Distribution | p. 524 |
Uniform Distribution | p. 524 |
Normal (Gaussian) Distribution | p. 525 |
Chi-Square Distribution | p. 526 |
t Distribution | p. 527 |
F Distribution | p. 527 |
References | p. 527 |
Index | p. 529 |