Neural Networks and Learning Machines

by Simon Haykin
  • ISBN-13: 9780131471399
  • ISBN-10: 0131471392
  • Edition: 3rd
  • Format: Hardcover
  • Copyright: 2008-11-18
  • Publisher: Pearson

Note: Supplemental materials are not guaranteed with Rental or Used book purchases.

Purchase Benefits

  • Free shipping on orders over $35. Your order must be $35 or more to qualify for free economy shipping; bulk sales, POs, Marketplace items, eBooks, and apparel do not qualify for this offer.
  • Get rewarded for ordering your textbooks through the rewards program.
  • We buy this book back: $12.60 in-store credit, or $12.00 by check/direct deposit.
  • List price: $254.99; save up to $193.80.
  • Rent this book for $140.24. Currently available; usually ships in 24-48 hours.


Supplemental Materials

What is included with this book?

  • A new copy of this book will include any supplemental materials advertised. Please check the title of the book to determine whether it should include any access cards, study guides, lab manuals, CDs, etc.
  • Rental and eBook copies of this book are not guaranteed to include any supplemental materials; typically, only the book itself is included. This is true even if the title states that access cards, study guides, lab manuals, CDs, etc. are included.

Summary

Fluid and authoritative, this well-organized book represents the first comprehensive treatment of neural networks from an engineering perspective, providing extensive, state-of-the-art coverage that will expose readers to the myriad facets of neural networks and help them appreciate the technology's origin, capabilities, and potential applications.

The book examines all the important aspects of this emerging technology, covering the learning process, back-propagation, radial-basis functions, recurrent networks, self-organizing systems, modular networks, temporal processing, neurodynamics, and VLSI implementation. Computer experiments are integrated throughout to demonstrate how neural networks are designed and how they perform in practice. Chapter objectives, problems, worked examples, a bibliography, photographs, illustrations, and a thorough glossary reinforce the concepts throughout. New chapters delve into such areas as support vector machines and reinforcement learning/neurodynamic programming, and an entire chapter of case studies illustrates the real-life, practical applications of neural networks. A highly detailed bibliography is included for easy reference.

For professional engineers and research scientists.

Table of Contents

Preface (p. x)

Introduction (p. 1)
  • What Is a Neural Network? (p. 1)
  • The Human Brain (p. 6)
  • Models of a Neuron (p. 10)
  • Neural Networks Viewed as Directed Graphs (p. 15)
  • Feedback (p. 18)
  • Network Architectures (p. 21)
  • Knowledge Representation (p. 24)
  • Learning Processes (p. 34)
  • Learning Tasks (p. 38)
  • Concluding Remarks (p. 45)
  • Notes and References (p. 46)

Rosenblatt's Perceptron (p. 47)
  • Introduction (p. 47)
  • Perceptron (p. 48)
  • The Perceptron Convergence Theorem (p. 50)
  • Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment (p. 55)
  • Computer Experiment: Pattern Classification (p. 60)
  • The Batch Perceptron Algorithm (p. 62)
  • Summary and Discussion (p. 65)
  • Notes and References (p. 66)
  • Problems (p. 66)

Model Building through Regression (p. 68)
  • Introduction (p. 68)
  • Linear Regression Model: Preliminary Considerations (p. 69)
  • Maximum a Posteriori Estimation of the Parameter Vector (p. 71)
  • Relationship Between Regularized Least-Squares Estimation and MAP Estimation (p. 76)
  • Computer Experiment: Pattern Classification (p. 77)
  • The Minimum-Description-Length Principle (p. 79)
  • Finite Sample-Size Considerations (p. 82)
  • The Instrumental-Variables Method (p. 86)
  • Summary and Discussion (p. 88)
  • Notes and References (p. 89)
  • Problems (p. 89)

The Least-Mean-Square Algorithm (p. 91)
  • Introduction (p. 91)
  • Filtering Structure of the LMS Algorithm (p. 92)
  • Unconstrained Optimization: A Review (p. 94)
  • The Wiener Filter (p. 100)
  • The Least-Mean-Square Algorithm (p. 102)
  • Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter (p. 104)
  • The Langevin Equation: Characterization of Brownian Motion (p. 106)
  • Kushner's Direct-Averaging Method (p. 107)
  • Statistical LMS Learning Theory for Small Learning-Rate Parameter (p. 108)
  • Computer Experiment I: Linear Prediction (p. 110)
  • Computer Experiment II: Pattern Classification (p. 112)
  • Virtues and Limitations of the LMS Algorithm (p. 113)
  • Learning-Rate Annealing Schedules (p. 115)
  • Summary and Discussion (p. 117)
  • Notes and References (p. 118)
  • Problems (p. 119)

Multilayer Perceptrons (p. 122)
  • Introduction (p. 123)
  • Some Preliminaries (p. 124)
  • Batch Learning and On-Line Learning (p. 126)
  • The Back-Propagation Algorithm (p. 129)
  • XOR Problem (p. 141)
  • Heuristics for Making the Back-Propagation Algorithm Perform Better (p. 144)
  • Computer Experiment: Pattern Classification (p. 150)
  • Back Propagation and Differentiation (p. 153)
  • The Hessian and Its Role in On-Line Learning (p. 155)
  • Optimal Annealing and Adaptive Control of the Learning Rate (p. 157)
  • Generalization (p. 164)
  • Approximations of Functions (p. 166)
  • Cross-Validation (p. 171)
  • Complexity Regularization and Network Pruning (p. 175)
  • Virtues and Limitations of Back-Propagation Learning (p. 180)
  • Supervised Learning Viewed as an Optimization Problem (p. 186)
  • Convolutional Networks (p. 201)
  • Nonlinear Filtering (p. 203)
  • Small-Scale Versus Large-Scale Learning Problems (p. 209)
  • Summary and Discussion (p. 217)
  • Notes and References (p. 219)
  • Problems (p. 221)

Kernel Methods and Radial-Basis Function Networks (p. 230)
  • Introduction (p. 230)
  • Cover's Theorem on the Separability of Patterns (p. 231)
  • The Interpolation Problem (p. 236)
  • Radial-Basis-Function Networks (p. 239)
  • K-Means Clustering (p. 242)
  • Recursive Least-Squares Estimation of the Weight Vector (p. 245)
  • Hybrid Learning Procedure for RBF Networks (p. 249)
  • Computer Experiment: Pattern Classification (p. 250)
  • Interpretations of the Gaussian Hidden Units (p. 252)
  • Kernel Regression and Its Relation to RBF Networks (p. 255)
  • Summary and Discussion (p. 259)
  • Notes and References (p. 261)
  • Problems (p. 263)

Support Vector Machines (p. 268)
  • Introduction (p. 268)
  • Optimal Hyperplane for Linearly Separable Patterns (p. 269)
  • Optimal Hyperplane for Nonseparable Patterns (p. 276)
  • The Support Vector Machine Viewed as a Kernel Machine (p. 281)
  • Design of Support Vector Machines (p. 284)
  • XOR Problem (p. 286)
  • Computer Experiment: Pattern Classification (p. 289)
  • Regression: Robustness Considerations (p. 289)
  • Optimal Solution of the Linear Regression Problem (p. 293)
  • The Representer Theorem and Related Issues (p. 296)
  • Summary and Discussion (p. 302)
  • Notes and References (p. 304)
  • Problems (p. 307)

Regularization Theory (p. 313)
  • Introduction (p. 313)
  • Hadamard's Conditions for Well-Posedness (p. 314)
  • Tikhonov's Regularization Theory (p. 315)
  • Regularization Networks (p. 326)
  • Generalized Radial-Basis-Function Networks (p. 327)
  • The Regularized Least-Squares Estimator: Revisited (p. 331)
  • Additional Notes of Interest on Regularization (p. 335)
  • Estimation of the Regularization Parameter (p. 336)
  • Semisupervised Learning (p. 342)
  • Manifold Regularization: Preliminary Considerations (p. 343)
  • Differentiable Manifolds (p. 345)
  • Generalized Regularization Theory (p. 348)
  • Spectral Graph Theory (p. 350)
  • Generalized Representer Theorem (p. 352)
  • Laplacian Regularized Least-Squares Algorithm (p. 354)
  • Experiments on Pattern Classification Using Semisupervised Learning (p. 356)
  • Summary and Discussion (p. 359)
  • Notes and References (p. 361)
  • Problems (p. 363)

Principal-Components Analysis (p. 367)
  • Introduction (p. 367)
  • Principles of Self-Organization (p. 368)
  • Self-Organized Feature Analysis (p. 372)
  • Principal-Components Analysis: Perturbation Theory (p. 373)
  • Hebbian-Based Maximum Eigenfilter (p. 383)
  • Hebbian-Based Principal-Components Analysis (p. 392)
  • Case Study: Image Coding (p. 398)
  • Kernel Principal-Components Analysis (p. 401)
  • Basic Issues Involved in the Coding of Natural Images (p. 406)
  • Kernel Hebbian Algorithm (p. 407)
  • Summary and Discussion (p. 412)
  • Notes and References (p. 415)
  • Problems (p. 418)

Self-Organizing Maps (p. 425)
  • Introduction (p. 425)
  • Two Basic Feature-Mapping Models (p. 426)
  • Self-Organizing Map (p. 428)
  • Properties of the Feature Map (p. 437)
  • Computer Experiment I: Disentangling Lattice Dynamics Using SOM (p. 445)
  • Contextual Maps (p. 447)
  • Hierarchical Vector Quantization (p. 450)
  • Kernel Self-Organizing Map (p. 454)
  • Computer Experiment II: Disentangling Lattice Dynamics Using Kernel SOM (p. 462)
  • Relationship Between Kernel SOM and Kullback-Leibler Divergence (p. 464)
  • Summary and Discussion (p. 466)
  • Notes and References (p. 468)
  • Problems (p. 470)

Information-Theoretic Learning Models (p. 475)
  • Introduction (p. 476)
  • Entropy (p. 477)
  • Maximum-Entropy Principle (p. 481)
  • Mutual Information (p. 484)
  • Kullback-Leibler Divergence (p. 486)
  • Copulas (p. 489)
  • Mutual Information as an Objective Function to Be Optimized (p. 493)
  • Maximum Mutual Information Principle (p. 494)
  • Infomax and Redundancy Reduction (p. 499)
  • Spatially Coherent Features (p. 501)
  • Spatially Incoherent Features (p. 504)
  • Independent-Components Analysis (p. 508)
  • Sparse Coding of Natural Images and Comparison with ICA Coding (p. 514)
  • Natural-Gradient Learning for Independent-Components Analysis (p. 516)
  • Maximum-Likelihood Estimation for Independent-Components Analysis (p. 526)
  • Maximum-Entropy Learning for Blind Source Separation (p. 529)
  • Maximization of Negentropy for Independent-Components Analysis (p. 534)
  • Coherent Independent-Components Analysis (p. 541)
  • Rate Distortion Theory and Information Bottleneck (p. 549)
  • Optimal Manifold Representation of Data (p. 553)
  • Computer Experiment: Pattern Classification (p. 560)
  • Summary and Discussion (p. 561)
  • Notes and References (p. 564)
  • Problems (p. 572)

Stochastic Methods Rooted in Statistical Mechanics (p. 579)
  • Introduction (p. 580)
  • Statistical Mechanics (p. 580)
  • Markov Chains (p. 582)
  • Metropolis Algorithm (p. 591)
  • Simulated Annealing (p. 594)
  • Gibbs Sampling (p. 596)
  • Boltzmann Machine (p. 598)
  • Logistic Belief Nets (p. 604)
  • Deep Belief Nets (p. 606)
  • Deterministic Annealing (p. 610)
  • Analogy of Deterministic Annealing with Expectation-Maximization Algorithm (p. 616)
  • Summary and Discussion (p. 617)
  • Notes and References (p. 619)
  • Problems (p. 621)

Dynamic Programming (p. 627)
  • Introduction (p. 627)
  • Markov Decision Process (p. 629)
  • Bellman's Optimality Criterion (p. 631)
  • Policy Iteration (p. 635)
  • Value Iteration (p. 637)
  • Approximate Dynamic Programming: Direct Methods (p. 642)
  • Temporal-Difference Learning (p. 643)
  • Q-Learning (p. 648)
  • Approximate Dynamic Programming: Indirect Methods (p. 652)
  • Least-Squares Policy Evaluation (p. 655)
  • Approximate Policy Iteration (p. 660)
  • Summary and Discussion (p. 663)
  • Notes and References (p. 665)
  • Problems (p. 668)

Neurodynamics (p. 672)
  • Introduction (p. 672)
  • Dynamic Systems (p. 674)
  • Stability of Equilibrium States (p. 678)
  • Attractors (p. 684)
  • Neurodynamic Models (p. 686)
  • Manipulation of Attractors as a Recurrent Network Paradigm (p. 689)
  • Hopfield Model (p. 690)
  • The Cohen-Grossberg Theorem (p. 703)
  • Brain-State-in-a-Box Model (p. 705)
  • Strange Attractors and Chaos (p. 711)
  • Dynamic Reconstruction of a Chaotic Process (p. 716)
  • Summary and Discussion (p. 722)
  • Notes and References (p. 724)
  • Problems (p. 727)

Bayesian Filtering for State Estimation of Dynamic Systems (p. 731)
  • Introduction (p. 731)
  • State-Space Models (p. 732)
  • Kalman Filters (p. 736)
  • The Divergence Phenomenon and Square-Root Filtering (p. 744)
  • The Extended Kalman Filter (p. 750)
  • The Bayesian Filter (p. 755)
  • Cubature Kalman Filter: Building on the Kalman Filter (p. 759)
  • Particle Filters (p. 765)
  • Computer Experiment: Comparative Evaluation of Extended Kalman and Particle Filters (p. 775)
  • Kalman Filtering in Modeling of Brain Functions (p. 777)
  • Summary and Discussion (p. 780)
  • Notes and References (p. 782)
  • Problems (p. 784)

Dynamically Driven Recurrent Networks (p. 790)
  • Introduction (p. 790)
  • Recurrent Network Architectures (p. 791)
  • Universal Approximation Theorem (p. 797)
  • Controllability and Observability (p. 799)
  • Computational Power of Recurrent Networks (p. 804)
  • Learning Algorithms (p. 806)
  • Back Propagation Through Time (p. 808)
  • Real-Time Recurrent Learning (p. 812)
  • Vanishing Gradients in Recurrent Networks (p. 818)
  • Supervised Training Framework for Recurrent Networks Using Nonlinear Sequential State Estimators (p. 822)
  • Computer Experiment: Dynamic Reconstruction of Mackey-Glass Attractor (p. 829)
  • Adaptivity Considerations (p. 831)
  • Case Study: Model Reference Applied to Neurocontrol (p. 833)
  • Summary and Discussion (p. 835)
  • Notes and References (p. 839)
  • Problems (p. 842)

Bibliography (p. 845)
Index (p. 889)
Table of Contents provided by Ingram. All Rights Reserved.
