


Neural Networks for Applied Sciences and Engineering: From Fundamentals to Complex Pattern Recognition

by Samarasinghe, Sandhya
  • ISBN13: 9780849333750
  • ISBN10: 084933375X

  • Edition: 1st
  • Format: Hardcover
  • Copyright: 2006-09-12
  • Publisher: Auerbach Publications

Note: Supplemental materials are not guaranteed with Rental or Used book purchases.

Purchase Benefits

List Price: $155.00 (save up to $50.37)
  • Rent Book $104.63 (Free Shipping)

    Usually ships in 3-5 business days.
    *This item is part of an exclusive publisher rental program and requires an additional convenience fee. This fee will be reflected in the shopping cart.

How To: Textbook Rental

Looking to rent a book? Rent Neural Networks for Applied Sciences and Engineering: From Fundamentals to Complex Pattern Recognition [ISBN: 9780849333750] for the semester, quarter, or short term, or search our site for other textbooks by Samarasinghe, Sandhya. Renting a textbook can save you up to 90% off the cost of buying.

Summary

In response to an increasing demand for novel computing methods, Neural Networks for Applied Sciences and Engineering provides a simple but systematic introduction to neural network applications. The book features case studies that use real data to demonstrate practical applications. It offers in-depth discussion of data and model validation, uncertainty and sensitivity assessment of models, and data dimensionality and methods to reduce it. It provides detailed coverage of neural network types for extracting nonlinear patterns from multidimensional scientific data for prediction, classification, clustering, and forecasting, with extensive coverage of linear networks, multilayer perceptrons, self-organizing maps, and recurrent networks.
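
The perceptron, the simplest of the network types the book covers, can be illustrated in a few lines of code. The following Python/NumPy snippet is a minimal, illustrative sketch of the classic perceptron learning rule on made-up toy data; it is not code from the book, and the data and variable names are assumptions chosen for the example.

```python
import numpy as np

# Minimal perceptron sketch (illustrative only; not taken from the book).
# A single threshold neuron separates 2-D points into two classes; weights
# are adjusted with the classic rule: w <- w + lr * (target - output) * x.

rng = np.random.default_rng(0)

# Hypothetical toy data: class 1 lies above the line x2 = x1, class 0 below.
X = rng.uniform(-1.0, 1.0, size=(100, 2))
t = (X[:, 1] > X[:, 0]).astype(float)

Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias input of 1
w = rng.normal(scale=0.1, size=3)              # small random initial weights
lr = 0.1                                       # learning rate

for epoch in range(50):
    mistakes = 0
    for x, target in zip(Xb, t):
        y = float(x @ w > 0.0)          # threshold (step) activation
        if y != target:
            w += lr * (target - y) * x  # perceptron weight update
            mistakes += 1
    if mistakes == 0:                   # every pattern classified correctly
        break

accuracy = np.mean((Xb @ w > 0.0) == t.astype(bool))
print(f"stopped after {epoch + 1} epochs, training accuracy = {accuracy:.2f}")
```

For linearly separable data like this, the perceptron convergence theorem guarantees that the update loop terminates after finitely many weight changes.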

Author Biography

Sandhya Samarasinghe obtained her MSc in Mechanical Engineering from Lumumba University in Russia and an MS and PhD in Engineering from Virginia Tech, USA.

Table of Contents

Preface xvii
Acknowledgments xxi
About the Author xxiii
From Data to Models: Complexity and Challenges in Understanding Biological, Ecological, and Natural Systems 1(10)
Introduction 1(3)
Layout of the Book 4(7)
References 7(4)
Fundamentals of Neural Networks and Models for Linear Data Analysis 11(58)
Introduction and Overview 11(1)
Neural Networks and Their Capabilities 12(4)
Inspirations from Biology 16(2)
Modeling Information Processing in Neurons 18(1)
Neuron Models and Learning Strategies 19(47)
Threshold Neuron as a Simple Classifier 20(3)
Learning Models for Neurons and Neural Assemblies 23(1)
Hebbian Learning 23(3)
Unsupervised or Competitive Learning 26(1)
Supervised Learning 26(1)
Perceptron with Supervised Learning as a Classifier 27(1)
Perceptron Learning Algorithm 28(7)
A Practical Example of Perceptron on a Larger Realistic Data Set: Identifying the Origin of Fish from the Growth-Ring Diameter of Scales 35(3)
Comparison of Perceptron with Linear Discriminant Function Analysis in Statistics 38(2)
Multi-Output Perceptron for Multicategory Classification 40(5)
Higher-Dimensional Classification Using Perceptron 45(1)
Perceptron Summary 45(1)
Linear Neuron for Linear Classification and Prediction 46(1)
Learning with the Delta Rule 47(4)
Linear Neuron as a Classifier 51(2)
Classification Properties of a Linear Neuron as a Subset of Predictive Capabilities 53(1)
Example: Linear Neuron as a Predictor 54(7)
A Practical Example of Linear Prediction: Predicting the Heat Influx in a Home 61(1)
Comparison of Linear Neuron Model with Linear Regression 62(1)
Example: Multiple Input Linear Neuron Model---Improving the Prediction Accuracy of Heat Influx in a Home 63(1)
Comparison of a Multiple-Input Linear Neuron with Multiple Linear Regression 63(1)
Multiple Linear Neuron Models 64(1)
Comparison of a Multiple Linear Neuron Network with Canonical Correlation Analysis 65(1)
Linear Neuron and Linear Network Summary 65(1)
Summary 66(3)
Problems 66(1)
References 67(2)
Neural Networks for Nonlinear Pattern Recognition 69(44)
Overview and Introduction 69(3)
Multilayer Perceptron 71(1)
Nonlinear Neurons 72(8)
Neuron Activation Functions 73(1)
Sigmoid Functions 74(2)
Gaussian Functions 76(1)
Example: Population Growth Modeling Using a Nonlinear Neuron 77(3)
Comparison of Nonlinear Neuron with Nonlinear Regression Analysis 80(1)
One-Input Multilayer Nonlinear Networks 80(18)
Processing with a Single Nonlinear Hidden Neuron 80(6)
Examples: Modeling Cyclical Phenomena with Multiple Nonlinear Neurons 86(1)
Example 1: Approximating a Square Wave 86(8)
Example 2: Modeling Seasonal Species Migration 94(4)
Two-Input Multilayer Perceptron Network 98(5)
Processing of Two-Dimensional Inputs by Nonlinear Neurons 98(4)
Network Output 102(1)
Examples: Two-Dimensional Prediction and Classification 103(6)
Example 1: Two-Dimensional Nonlinear Function Approximation 103(2)
Example 2: Two-Dimensional Nonlinear Classification Model 105(4)
Multidimensional Data Modeling with Nonlinear Multilayer Perceptron Networks 109(1)
Summary 110(3)
Problems 110(2)
References 112(1)
Learning of Nonlinear Patterns by Neural Networks 113(82)
Introduction and Overview 113(1)
Supervised Training of Networks for Nonlinear Pattern Recognition 114(1)
Gradient Descent and Error Minimization 115(1)
Backpropagation Learning 116(36)
Example: Backpropagation Training---A Hand Computation 117(3)
Error Gradient with Respect to Output Neuron Weights 120(3)
The Error Gradient with Respect to the Hidden-Neuron Weights 123(4)
Application of Gradient Descent in Backpropagation Learning 127(1)
Batch Learning 128(2)
Learning Rate and Weight Update 130(4)
Example-by-Example (Online) Learning 134(1)
Momentum 134(4)
Example: Backpropagation Learning Computer Experiment 138(3)
Single-Input Single-Output Network with Multiple Hidden Neurons 141(1)
Multiple-Input, Multiple-Hidden Neuron, and Single-Output Network 142(1)
Multiple-Input, Multiple-Hidden Neuron, Multiple-Output Network 143(2)
Example: Backpropagation Learning Case Study---Solving a Complex Classification Problem 145(7)
Delta-Bar-Delta Learning (Adaptive Learning Rate) Method 152(11)
Example: Network Training with Delta-Bar-Delta---A Hand Computation 154(3)
Example: Delta-Bar-Delta with Momentum---A Hand Computation 157(1)
Network Training with Delta-Bar-Delta---A Computer Experiment 158(1)
Comparison of Delta-Bar-Delta Method with Backpropagation 159(1)
Example: Network Training with Delta-Bar-Delta---A Case Study 160(3)
Steepest Descent Method 163(3)
Example: Network Training with Steepest Descent---Hand Computation 163(1)
Example: Network Training with Steepest Descent---A Computer Experiment 164(2)
Second-Order Methods of Error Minimization and Weight Optimization 166(26)
QuickProp 167(1)
Example: Network Training with QuickProp---A Hand Computation 168(2)
Example: Network Training with QuickProp---A Computer Experiment 170(1)
Comparison of QuickProp with Steepest Descent, Delta-Bar-Delta, and Backpropagation 170(2)
General Concept of Second-Order Methods of Error Minimization 172(2)
Gauss--Newton Method 174(2)
Network Training with the Gauss--Newton Method---A Hand Computation 176(2)
Example: Network Training with the Gauss--Newton Method---A Computer Experiment 178(2)
The Levenberg--Marquardt Method 180(2)
Example: Network Training with LM Method---A Hand Computation 182(1)
Network Training with the LM Method---A Computer Experiment 183(1)
Comparison of the Efficiency of the First-Order and Second-Order Methods in Minimizing Error 184(1)
Comparison of the Convergence Characteristics of First-Order and Second-Order Learning Methods 185(2)
Backpropagation 187(1)
Steepest Descent Method 188(1)
Gauss--Newton Method 189(1)
Levenberg--Marquardt Method 190(2)
Summary 192(3)
Problems 192(1)
References 193(2)
Implementation of Neural Network Models for Extracting Reliable Patterns from Data 195(50)
Introduction and Overview 195(1)
Bias--Variance Tradeoff 196(1)
Improving Generalization of Neural Networks 197(24)
Illustration of Early Stopping 199(4)
Effect of Initial Random Weights 203(3)
Weight Structure of the Trained Networks 206(1)
Effect of Random Sampling 207(5)
Effect of Model Complexity: Number of Hidden Neurons 212(1)
Summary on Early Stopping 213(2)
Regularization 215(6)
Reducing Structural Complexity of Networks by Pruning 221(16)
Optimal Brain Damage 222(1)
Example of Network Pruning with Optimal Brain Damage 223(6)
Network Pruning Based on Variance of Network Sensitivity 229(3)
Illustration of Application of Variance Nullity in Pruning Weights 232(3)
Pruning Hidden Neurons Based on Variance Nullity of Sensitivity 235(2)
Robustness of a Network to Perturbation of Weights 237(4)
Confidence Intervals for Weights 239(2)
Summary 241(4)
Problems 242(1)
References 243(2)
Data Exploration, Dimensionality Reduction, and Feature Extraction 245(38)
Introduction and Overview 245(3)
Example: Thermal Conductivity of Wood in Relation to Correlated Input Data 247(1)
Data Visualization 248(3)
Correlation Scatter Plots and Histograms 248(1)
Parallel Visualization 249(1)
Projecting Multidimensional Data onto Two-Dimensional Plane 250(1)
Correlation and Covariance between Variables 251(2)
Normalization of Data 253(6)
Standardization 253(1)
Simple Range Scaling 254(1)
Whitening---Normalization of Correlated Multivariate Data 255(4)
Selecting Relevant Inputs 259(3)
Statistical Tools for Variable Selection 260(1)
Partial Correlation 260(1)
Multiple Regression and Best-Subsets Regression 261(1)
Dimensionality Reduction and Feature Extraction 262(6)
Multicollinearity 262(1)
Principal Component Analysis (PCA) 263(4)
Partial Least-Squares Regression 267(1)
Outlier Detection 268(2)
Noise 270(1)
Case Study: Illustrating Input Selection and Dimensionality Reduction for a Practical Problem 270(10)
Data Preprocessing and Preliminary Modeling 271(4)
PCA-Based Neural Network Modeling 275(3)
Effect of Hidden Neurons for Non-PCA- and PCA-Based Approaches 278(1)
Case Study Summary 279(1)
Summary 280(3)
Problems 281(1)
References 281(2)
Assessment of Uncertainty of Neural Network Models Using Bayesian Statistics 283(54)
Introduction and Overview 283(2)
Estimating Weight Uncertainty Using Bayesian Statistics 285(15)
Quality Criterion 285(3)
Incorporating Bayesian Statistics to Estimate Weight Uncertainty 288(1)
Square Error 289(3)
Intrinsic Uncertainty of Targets for Multivariate Output 292(1)
Probability Density Function of Weights 293(2)
Example Illustrating Generation of Probability Distribution of Weights 295(1)
Estimation of Geophysical Parameters from Remote Sensing: A Case Study 295(5)
Assessing Uncertainty of Neural Network Outputs Using Bayesian Statistics 300(11)
Example Illustrating Uncertainty Assessment of Output Errors 301(1)
Total Network Output Errors 301(1)
Error Correlation and Covariance Matrices 302(1)
Statistical Analysis of Error Covariance 302(2)
Decomposition of Total Output Error into Model Error and Intrinsic Noise 304(7)
Assessing the Sensitivity of Network Outputs to Inputs 311(22)
Approaches to Determine the Influence of Inputs on Outputs in Feedforward Networks 311(1)
Methods Based on Magnitude of Weights 311(1)
Sensitivity Analysis 312(1)
Example: Comparison of Methods to Assess the Influence of Inputs on Outputs 313(1)
Uncertainty of Sensitivities 314(1)
Example Illustrating Uncertainty Assessment of Network Sensitivity to Inputs 315(1)
PCA Decomposition of Inputs and Outputs 315(5)
PCA-Based Neural Network Regression 320(3)
Neural Network Sensitivities 323(2)
Uncertainty of Input Sensitivity 325(3)
PCA-Regularized Jacobians 328(5)
Case Study Summary 333(1)
Summary 333(4)
Problems 334(1)
References 335(2)
Discovering Unknown Clusters in Data with Self-Organizing Maps 337(100)
Introduction and Overview 337(1)
Structure of Unsupervised Networks 338(1)
Learning in Unsupervised Networks 339(1)
Implementation of Competitive Learning 340(9)
Winner Selection Based on Neuron Activation 340(1)
Winner Selection Based on Distance to Input Vector 341(1)
Other Distance Measures 342(1)
Competitive Learning Example 343(1)
Recursive Versus Batch Learning 344(1)
Illustration of the Calculations Involved in Winner Selection 344(2)
Network Training 346(3)
Self-Organizing Feature Maps 349(22)
Learning in Self-Organizing Map Networks 349(1)
Selection of Neighborhood Geometry 349(1)
Training of Self-Organizing Maps 350(1)
Neighbor Strength 350(1)
Example: Training Self-Organizing Networks with a Neighbor Feature 351(3)
Neighbor Matrix and Distance to Neighbors from the Winner 354(3)
Shrinking Neighborhood Size with Iterations 357(1)
Learning Rate Decay 358(1)
Weight Update Incorporating Learning Rate and Neighborhood Decay 359(1)
Recursive and Batch Training and Relation to K-Means Clustering 360(1)
Two Phases of Self-Organizing Map Training 360(1)
Example: Illustrating Self-Organizing Map Learning with a Hand Calculation 361(7)
SOM Case Study: Determination of Mastitis Health Status of Dairy Herd from Combined Milk Traits 368(3)
Example of Two-Dimensional Self-Organizing Maps: Clustering Canadian and Alaskan Salmon Based on the Diameter of Growth Rings of the Scales 371(11)
Map Structure and Initialization 372(1)
Map Training 373(7)
U-Matrix 380(2)
Map Initialization 382(29)
Example: Training Two-Dimensional Maps on Multidimensional Data 382(1)
Data Visualization 383(1)
Map Structure and Training 383(6)
U-Matrix 389(1)
Point Estimates of Probability Density of Inputs Captured by the Map 390(1)
Quantization Error 391(2)
Accuracy of Retrieval of Input Data from the Map 393(2)
Forming Clusters on the Map 395(1)
Approaches to Clustering 396(1)
Example Illustrating Clustering on a Trained Map 397(4)
Finding Optimum Clusters on the Map with the Ward Method 401(2)
Finding Optimum Clusters by K-Means Clustering 403(3)
Validation of a Trained Map 406(1)
n-Fold Cross Validation 406(5)
Evolving Self-Organizing Maps 411(20)
Growing Cell Structure of Map 413(3)
Centroid Method for Mapping Input Data onto Positions between Neurons on the Map 416(3)
Dynamic Self-Organizing Maps with Controlled Growth (GSOM) 419(3)
Example: Application of Dynamic Self-Organizing Maps 422(5)
Evolving Tree 427(4)
Summary 431(6)
Problems 432(2)
References 434(3)
Neural Networks for Time-Series Forecasting 437(118)
Introduction and Overview 437(3)
Linear Forecasting of Time-Series with Statistical and Neural Network Models 440(6)
Example Case Study: Regulating Temperature of a Furnace 442(2)
Multistep-Ahead Linear Forecasting 444(2)
Neural Networks for Nonlinear Time-Series Forecasting 446(22)
Focused Time-Lagged and Dynamically Driven Recurrent Networks 446(2)
Focused Time-Lagged Feedforward Networks 448(2)
Spatio-Temporal Time-Lagged Networks 450(2)
Example: Spatio-Temporal Time-Lagged Network---Regulating Temperature in a Furnace 452(2)
Single-Step Forecasting with Neural NARx Model 454(1)
Multistep Forecasting with Neural NARx Model 455(2)
Case Study: River Flow Forecasting 457(3)
Linear Model for River Flow Forecasting 460(3)
Nonlinear Neural (NARx) Model for River Flow Forecasting 463(4)
Input Sensitivity 467(1)
Hybrid Linear (ARIMA) and Nonlinear Neural Network Models 468(3)
Case Study: Forecasting the Annual Number of Sunspots 470(1)
Automatic Generation of Network Structure Using Simplest Structure Concept 471(4)
Case Study: Forecasting Air Pollution with Automatic Neural Network Model Generation 473(2)
Generalized Neuron Network 475(10)
Case Study: Short-Term Load Forecasting with a Generalized Neuron Network 482(3)
Dynamically Driven Recurrent Networks 485(34)
Recurrent Networks with Hidden Neuron Feedback 485(1)
Encapsulating Long-Term Memory 485(3)
Structure and Operation of the Elman Network 488(2)
Training Recurrent Networks 490(5)
Network Training Example: Hand Calculation 495(5)
Recurrent Learning Network Application Case Study: Rainfall Runoff Modeling 500(3)
Two-Step-Ahead Forecasting with Recurrent Networks 503(2)
Real-Time Recurrent Learning Case Study: Two-Step-Ahead Stream Flow Forecasting 505(3)
Recurrent Networks with Output Feedback 508(1)
Encapsulating Long-Term Memory in Recurrent Networks with Output Feedback 508(2)
Application of a Recurrent Net with Output and Error Feedback and Exogenous Inputs (NARIMAx) Case Study: Short-Term Temperature Forecasting 510(3)
Training of Recurrent Nets with Output Feedback 513(2)
Fully Recurrent Network 515(2)
Fully Recurrent Network Practical Application Case Study: Short-Term Electricity Load Forecasting 517(2)
Bias and Variance in Time-Series Forecasting 519(9)
Decomposition of Total Error into Bias and Variance Components 521(1)
Example Illustrating Bias--Variance Decomposition 522(6)
Long-Term Forecasting 528(5)
Case Study: Long-Term Forecasting with Multiple Neural Networks (MNNs) 531(2)
Input Selection for Time-Series Forecasting 533(16)
Input Selection from Nonlinearly Dependent Variables 535(1)
Partial Mutual Information Method 535(3)
Generalized Regression Neural Network 538(1)
Self-Organizing Maps for Input Selection 539(2)
Genetic Algorithms for Input Selection 541(2)
Practical Application of Input Selection Methods for Time-Series Forecasting 543(3)
Input Selection Case Study: Selecting Inputs for Forecasting River Salinity 546(3)
Summary 549(6)
Problems 551(1)
References 552(3)
Appendix 555(6)
Index 561

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.
