Introduction to Neural Networks | p. 1 |
Properties of Neural Networks | p. 3 |
Neural Network Learning | p. 5 |
Supervised Learning | p. 5 |
Unsupervised Learning | p. 5 |
Perceptron | p. 6 |
Adaline and Least Mean Square Algorithm | p. 8 |
Multilayer Perceptron and Backpropagation Algorithm | p. 9 |
Output Layer Learning | p. 11 |
Hidden Layer Learning | p. 11 |
Radial Basis Function Networks | p. 12 |
Support Vector Machines | p. 13 |
Principles of Sensitivity Analysis | p. 17 |
Perturbations in Neural Networks | p. 17 |
Neural Network Sensitivity Analysis | p. 18 |
Fundamental Methods of Sensitivity Analysis | p. 21 |
Geometrical Approach | p. 21 |
Statistical Approach | p. 23 |
Summary | p. 24 |
Hyper-Rectangle Model | p. 25 |
Hyper-Rectangle Model for Input Space of MLP | p. 25 |
Sensitivity Measure of MLP | p. 26 |
Discussion | p. 27 |
Sensitivity Analysis with Parameterized Activation Function | p. 29 |
Parameterized Antisymmetric Squashing Function | p. 29 |
Sensitivity Measure | p. 30 |
Summary | p. 31 |
Localized Generalization Error Model | p. 33 |
Introduction | p. 33 |
The Localized Generalization Error Model | p. 35 |
The Q-Neighborhood and Q-Union | p. 36 |
The Localized Generalization Error Bound | p. 36 |
Stochastic Sensitivity Measure for RBFNN | p. 38 |
Characteristics of the Error Bound | p. 40 |
Comparing Two Classifiers Using the Error Bound | p. 42 |
Architecture Selection Using the Error Bound | p. 42 |
Parameters for MC2SG | p. 44 |
RBFNN Architecture Selection Algorithm for MC2SG | p. 44 |
A Heuristic Method to Reduce the Computational Time for MC2SG | p. 45 |
Summary | p. 45 |
Critical Vector Learning for RBF Networks | p. 47 |
Related Work | p. 47 |
Construction of RBF Networks with Sensitivity Analysis | p. 48 |
RBF Classifiers' Sensitivity to the Kernel Function Centers | p. 49 |
Orthogonal Least Square Transform | p. 51 |
Critical Vector Selection | p. 52 |
Summary | p. 52 |
Sensitivity Analysis of Prior Knowledge | p. 55 |
KBANNs | p. 56 |
Inductive Bias | p. 58 |
Sensitivity Analysis and Measures | p. 59 |
Output-Pattern Sensitivity | p. 59 |
Output-Weight Sensitivity | p. 60 |
Output-H Sensitivity | p. 61 |
Euclidean Distance | p. 61 |
Promoter Recognition | p. 61 |
Data and Initial Domain Theory | p. 62 |
Experimental Methodology | p. 63 |
Discussion and Conclusion | p. 64 |
Applications | p. 69 |
Input Dimension Reduction | p. 69 |
Sensitivity Matrix | p. 70 |
Criteria for Pruning Inputs | p. 70 |
Network Optimization | p. 71 |
Selective Learning | p. 74 |
Hardware Robustness | p. 76 |
Measure of Nonlinearity | p. 77 |
Parameter Tuning for Neocognitron | p. 78 |
Receptive Field | p. 79 |
Selectivity | p. 80 |
Sensitivity Analysis of the Neocognitron | p. 80 |
Bibliography | p. 83 |