
Mathematical Methods and Algorithms for Signal Processing

by Moon, Todd K.; Stirling, Wynn C.
  • ISBN13: 9780201361865
  • ISBN10: 0201361868

  • Edition: 1st
  • Format: Paperback
  • Copyright: 1999-08-04
  • Publisher: Pearson
List Price: $246.65
  • Buy New: $245.42

    PRINT ON DEMAND: 2-4 WEEKS. THIS ITEM CANNOT BE CANCELLED OR RETURNED.

Summary

This textbook bridges the gap between introductory signal processing classes and the mathematics prevalent in contemporary signal processing research and practice. Moon (Utah State University) and Stirling (Brigham Young University) treat linear algebra, statistical signal processing, iterative algorithms, and optimization. The CD-ROM contains algorithms and exercises written in MATLAB.

Author Biography

Todd K. Moon is currently with the Electrical and Computer Engineering department at Utah State University. Wynn C. Stirling is a professor of electrical engineering at Brigham Young University. He received his Ph.D. in electrical engineering from Stanford University.

Table of Contents

I Introduction and Foundations 1(68)
1 Introduction and Foundations
3(66)
1.1 What is signal processing?
3(2)
1.2 Mathematical topics embraced by signal processing
5(1)
1.3 Mathematical models
6(1)
1.4 Models for linear systems and signals
7(21)
1.4.1 Linear discrete-time models
7(5)
1.4.2 Stochastic MA and AR models
12(8)
1.4.3 Continuous-time notation
20(1)
1.4.4 Issues and applications
21(5)
1.4.5 Identification of the modes
26(2)
1.4.6 Control of the modes
28(1)
1.5 Adaptive filtering
28(3)
1.5.1 System identification
29(1)
1.5.2 Inverse system identification
29(1)
1.5.3 Adaptive predictors
29(1)
1.5.4 Interference cancellation
30(1)
1.6 Gaussian random variables and random processes
31(6)
1.6.1 Conditional Gaussian densities
36(1)
1.7 Markov and Hidden Markov Models
37(4)
1.7.1 Markov models
37(2)
1.7.2 Hidden Markov models
39(2)
1.8 Some aspects of proofs
41(7)
1.8.1 Proof by computation: direct proof
43(2)
1.8.2 Proof by contradiction
45(1)
1.8.3 Proof by induction
46(2)
1.9 An application: LFSRs and Massey's algorithm
48(10)
1.9.1 Issues and applications of LFSRs
50(2)
1.9.2 Massey's algorithm
52(1)
1.9.3 Characterization of LFSR length in Massey's algorithm
53(5)
1.10 Exercises
58(9)
1.11 References
67(2)
II Vector Spaces and Linear Algebra 69(366)
2 Signal Spaces
71(59)
2.1 Metric spaces
72(12)
2.1.1 Some topological terms
76(2)
2.1.2 Sequences, Cauchy sequences, and completeness
78(4)
2.1.3 Technicalities associated with the L(p) and L(Infinity) spaces
82(2)
2.2 Vector spaces
84(9)
2.2.1 Linear combinations of vectors
87(1)
2.2.2 Linear independence
88(2)
2.2.3 Basis and dimension
90(3)
2.2.4 Finite-dimensional vector spaces and matrix notation
93(1)
2.3 Norms and normed vector spaces
93(4)
2.3.1 Finite-dimensional normed linear spaces
97(1)
2.4 Inner products and inner-product spaces
97(2)
2.4.1 Weak convergence
99(1)
2.5 Induced norms
99(1)
2.6 The Cauchy-Schwarz inequality
100(1)
2.7 Direction of vectors: Orthogonality
101(2)
2.8 Weighted inner products
103(3)
2.8.1 Expectation as an inner product
105(1)
2.9 Hilbert and Banach spaces
106(1)
2.10 Orthogonal subspaces
107(1)
2.11 Linear transformations: Range and nullspace
108(2)
2.12 Inner-sum and direct-sum spaces
110(3)
2.13 Projections and orthogonal projections
113(3)
2.13.1 Projection matrices
115(1)
2.14 The projection theorem
116(2)
2.15 Orthogonalization of vectors
118(3)
2.16 Some final technicalities for infinite dimensional spaces
121(1)
2.17 Exercises
121(8)
2.18 References
129(1)
3 Representation and Approximation in Vector Spaces
130(99)
3.1 The Approximation problem in Hilbert space
130(5)
3.1.1 The Grammian matrix
133(2)
3.2 The Orthogonality principle
135(2)
3.2.1 Representations in infinite-dimensional space
136(1)
3.3 Error minimization via gradients
137(1)
3.4 Matrix Representations of least-squares problems
138(3)
3.4.1 Weighted least-squares
140(1)
3.4.2 Statistical properties of the least-squares estimate
140(1)
3.5 Minimum error in Hilbert-space approximations
141(2)
Applications of the orthogonality theorem
3.6 Approximation by continuous polynomials
143(2)
3.7 Approximation by discrete polynomials
145(2)
3.8 Linear regression
147(2)
3.9 Least-squares filtering
149(7)
3.9.1 Least-squares prediction and AR spectrum estimation
154(2)
3.10 Minimum mean-square estimation
156(1)
3.11 Minimum mean-squared error (MMSE) filtering
157(4)
3.12 Comparison of least squares and minimum mean squares
161(1)
3.13 Frequency-domain optimal filtering
162(17)
3.13.1 Brief review of stochastic processes and Laplace transforms
162(3)
3.13.2 Two-sided Laplace transforms and their decompositions
165(4)
3.13.3 The Wiener-Hopf equation
169(2)
3.13.4 Solution to the Wiener-Hopf equation
171(3)
3.13.5 Examples of Wiener filtering
174(2)
3.13.6 Mean-square error
176(1)
3.13.7 Discrete-time Wiener filters
176(3)
3.14 A dual approximation problem
179(3)
3.15 Minimum-norm solution of underdetermined equations
182(1)
3.16 Iterative Reweighted LS (IRLS) for L(p) optimization
183(3)
3.17 Signal transformation and generalized Fourier series
186(4)
3.18 Sets of complete orthogonal functions
190(18)
3.18.1 Trigonometric functions
190(1)
3.18.2 Orthogonal polynomials
190(3)
3.18.3 Sinc functions
193(1)
3.18.4 Orthogonal wavelets
194(14)
3.19 Signals as points: Digital communications
208(7)
3.19.1 The detection problem
210(2)
3.19.2 Examples of basis functions used in digital communications
212(1)
3.19.3 Detection in nonwhite noise
213(2)
3.20 Exercises
215(13)
3.21 References
228(1)
4 Linear Operators and Matrix Inverses
229(46)
4.1 Linear operators
230(2)
4.1.1 Linear functionals
231(1)
4.2 Operator norms
232(5)
4.2.1 Bounded operators
233(2)
4.2.2 The Neumann expansion
235(1)
4.2.3 Matrix norms
235(2)
4.3 Adjoint operators and transposes
237(2)
4.3.1 A dual optimization problem
239(1)
4.4 Geometry of linear equations
239(3)
4.5 Four fundamental subspaces of a linear operator
242(5)
4.5.1 The four fundamental subspaces with non-closed range
246(1)
4.6 Some properties of matrix inverses
247(2)
4.6.1 Tests for invertibility of matrices
248(1)
4.7 Some results on matrix rank
249(2)
4.7.1 Numeric rank
250(1)
4.8 Another look at least squares
251(1)
4.9 Pseudoinverses
251(2)
4.10 Matrix condition number
253(5)
4.11 Inverse of a small-rank adjustment
258(6)
4.11.1 An application: the RLS filter
259(2)
4.11.2 Two RLS applications
261(3)
4.12 Inverse of a block (partitioned) matrix
264(4)
4.12.1 Application: Linear models
267(1)
4.13 Exercises
268(6)
4.14 References
274(1)
5 Some Important Matrix Factorizations
275(30)
5.1 The LU factorization
275(8)
5.1.1 Computing the determinant using the LU factorization
277(1)
5.1.2 Computing the LU factorization
278(5)
5.2 The Cholesky factorization
283(2)
5.2.1 Algorithms for computing the Cholesky factorization
284(1)
5.3 Unitary matrices and the QR factorization
285(15)
5.3.1 Unitary matrices
285(1)
5.3.2 The QR factorization
286(1)
5.3.3 QR factorization and least-squares filters
286(1)
5.3.4 Computing the QR factorization
287(1)
5.3.5 Householder transformations
287(4)
5.3.6 Algorithms for Householder transformations
291(2)
5.3.7 QR factorization using Givens rotations
293(2)
5.3.8 Algorithms for QR factorization using Givens rotations
295(1)
5.3.9 Solving least-squares problems using Givens rotations
296(1)
5.3.10 Givens rotations via CORDIC rotations
297(2)
5.3.11 Recursive updates to the QR factorization
299(1)
5.4 Exercises
300(4)
5.5 References
304(1)
6 Eigenvalues and Eigenvectors
305(64)
6.1 Eigenvalues and linear systems
305(3)
6.2 Linear dependence of eigenvectors
308(1)
6.3 Diagonalization of a matrix
309(7)
6.3.1 The Jordan form
311(1)
6.3.2 Diagonalization of self-adjoint matrices
312(4)
6.4 Geometry of invariant subspaces
316(2)
6.5 Geometry of quadratic forms and the minimax principle
318(6)
6.6 Extremal quadratic forms subject to linear constraints
324(1)
6.7 The Gershgorin circle theorem
324(3)
Applications of eigendecomposition methods
6.8 Karhunen-Loeve low-rank approximations and principal component methods
327(3)
6.8.1 Principal component methods
329(1)
6.9 Eigenfilters
330(6)
6.9.1 Eigenfilters for random signals
330(2)
6.9.2 Eigenfilter for designed spectral response
332(2)
6.9.3 Constrained eigenfilters
334(2)
6.10 Signal subspace techniques
336(4)
6.10.1 The signal model
336(1)
6.10.2 The noise model
337(1)
6.10.3 Pisarenko harmonic decomposition
338(1)
6.10.4 MUSIC
339(1)
6.11 Generalized eigenvalues
340(2)
6.11.1 An application: ESPRIT
341(1)
6.12 Characteristic and minimal polynomials
342(2)
6.12.1 Matrix polynomials
342(2)
6.12.2 Minimal polynomials
344(1)
6.13 Moving the eigenvalues around: Introduction to linear control
344(3)
6.14 Noiseless constrained channel capacity
347(3)
6.15 Computation of eigenvalues and eigenvectors
350(5)
6.15.1 Computing the largest and smallest eigenvalues
350(1)
6.15.2 Computing the eigenvalues of a symmetric matrix
351(1)
6.15.3 The QR iteration
352(3)
6.16 Exercises
355(13)
6.17 References
368(1)
7 The Singular Value Decomposition
369(27)
7.1 Theory of the SVD
369(3)
7.2 Matrix structure from the SVD
372(1)
7.3 Pseudoinverses and the SVD
373(2)
7.4 Numerically sensitive problems
375(2)
7.5 Rank-reducing approximations: Effective rank
377(1)
Applications of the SVD
7.6 System identification using the SVD
378(3)
7.7 Total least-squares problems
381(5)
7.7.1 Geometric interpretation of the TLS solution
385(1)
7.8 Partial total least squares
386(3)
7.9 Rotation of subspaces
389(1)
7.10 Computation of the SVD
390(2)
7.11 Exercises
392(3)
7.12 References
395(1)
8 Some Special Matrices and Their Applications
396(26)
8.1 Modal matrices and parameter estimation
396(3)
8.2 Permutation matrices
399(1)
8.3 Toeplitz matrices and some applications
400(9)
8.3.1 Durbin's algorithm
402(1)
8.3.2 Predictors and lattice filters
403(4)
8.3.3 Optimal predictors and Toeplitz inverses
407(1)
8.3.4 Toeplitz equations with a general right-hand side
408(1)
8.4 Vandermonde matrices
409(1)
8.5 Circulant matrices
410(6)
8.5.1 Relations among Vandermonde, circulant, and companion matrices
412(1)
8.5.2 Asymptotic equivalence of the eigenvalues of Toeplitz and circulant matrices
413(3)
8.6 Triangular matrices
416(1)
8.7 Properties preserved in matrix products
417(1)
8.8 Exercises
418(3)
8.9 References
421(1)
9 Kronecker Products and the Vec Operator
422(13)
9.1 The Kronecker product and Kronecker sum
422(3)
9.2 Some applications of Kronecker products
425(3)
9.2.1 Fast Hadamard transforms
425(1)
9.2.2 DFT computation using Kronecker products
426(2)
9.3 The vec operator
428(3)
9.4 Exercises
431(2)
9.5 References
433(2)
III Detection, Estimation, and Optimal Filtering 435(186)
10 Introduction to Detection and Estimation, and Mathematical Notation
437(23)
10.1 Detection and estimation theory
437(5)
10.1.1 Game theory and decision theory
438(2)
10.1.2 Randomization
440(1)
10.1.3 Special cases
441(1)
10.2 Some notational conventions
442(2)
10.2.1 Populations and statistics
443(1)
10.3 Conditional expectation
444(1)
10.4 Transformations of random variables
445(1)
10.5 Sufficient statistics
446(7)
10.5.1 Examples of sufficient statistics
450(1)
10.5.2 Complete sufficient statistics
451(2)
10.6 Exponential families
453(3)
10.7 Exercises
456(3)
10.8 References
459(1)
11 Detection Theory
460(82)
11.1 Introduction to hypothesis testing
460(2)
11.2 Neyman-Pearson theory
462(21)
11.2.1 Simple binary hypothesis testing
462(1)
11.2.2 The Neyman-Pearson lemma
463(3)
11.2.3 Application of the Neyman-Pearson lemma
466(1)
11.2.4 The likelihood ratio and the receiver operating characteristic (ROC)
467(1)
11.2.5 A Poisson example
468(1)
11.2.6 Some Gaussian examples
469(11)
11.2.7 Properties of the ROC
480(3)
11.3 Neyman-Pearson testing with composite binary hypotheses
483(2)
11.4 Bayes decision theory
485(14)
11.4.1 The Bayes principle
486(1)
11.4.2 The risk function
487(2)
11.4.3 Bayes risk
489(1)
11.4.4 Bayes tests of simple binary hypotheses
490(4)
11.4.5 Posterior distributions
494(4)
11.4.6 Detection and sufficiency
498(1)
11.4.7 Summary of binary decision problems
498(1)
11.5 Some M-ary problems
499(4)
11.6 Maximum-likelihood detection
503(1)
11.7 Approximations to detection performance: The union bound
503(1)
11.8 Invariant tests
504(8)
11.8.1 Detection with random (nuisance) parameters
507(5)
11.9 Detection in continuous time
512(8)
11.9.1 Some extensions and precautions
516(4)
11.10 Minimax Bayes decisions
520(12)
11.10.1 Bayes envelope function
520(3)
11.10.2 Minimax rules
523(1)
11.10.3 Minimax Bayes in multiple-decision problems
524(4)
11.10.4 Determining the least favorable prior
528(1)
11.10.5 A minimax example and the minimax theorem
529(3)
11.11 Exercises
532(9)
11.12 References
541(1)
12 Estimation Theory
542(49)
12.1 The maximum-likelihood principle
542(5)
12.2 ML estimates and sufficiency
547(1)
12.3 Estimation quality
548(13)
12.3.1 The score function
548(2)
12.3.2 The Cramer-Rao lower bound
550(2)
12.3.3 Efficiency
552(1)
12.3.4 Asymptotic properties of maximum-likelihood estimators
553(3)
12.3.5 The multivariate normal case
556(3)
12.3.6 Minimum-variance unbiased estimators
559(2)
12.3.7 The linear statistical model
561(1)
12.4 Applications of ML estimation
561(7)
12.4.1 ARMA parameter estimation
561(4)
12.4.2 Signal subspace identification
565(1)
12.4.3 Phase estimation
566(2)
12.5 Bayes estimation theory
568(1)
12.6 Bayes risk
569(11)
12.6.1 MAP estimates
573(1)
12.6.2 Summary
574(1)
12.6.3 Conjugate prior distributions
574(3)
12.6.4 Connections with minimum mean-squared estimation
577(1)
12.6.5 Bayes estimation with the Gaussian distribution
578(2)
12.7 Recursive estimation
580(4)
12.7.1 An example of non-Gaussian recursive Bayes
582(2)
12.8 Exercises
584(6)
12.9 References
590(1)
13 The Kalman Filter
591(30)
13.1 The state-space signal model
591(1)
13.2 Kalman filter I: The Bayes approach
592(3)
13.3 Kalman filter I: The innovations approach
595(9)
13.3.1 Innovations for processes with linear observation models
596(1)
13.3.2 Estimation using the innovations process
597(1)
13.3.3 Innovations for processes with state-space models
598(1)
13.3.4 A recursion for P(t|t-1)
599(2)
13.3.5 The discrete-time Kalman filter
601(1)
13.3.6 Perspective
602(1)
13.3.7 Comparison with the RLS adaptive filter algorithm
603(1)
13.4 Numerical considerations: Square-root filters
604(2)
13.5 Application in continuous-time systems
606(1)
13.5.1 Conversion from continuous time to discrete time
606(1)
13.5.2 A simple kinematic example
606(1)
13.6 Extensions of Kalman filtering to nonlinear systems
607(6)
13.7 Smoothing
613(3)
13.7.1 The Rauch-Tung-Striebel fixed-interval smoother
613(3)
13.8 Another approach: H(Infinity) smoothing
616(1)
13.9 Exercises
617(3)
13.10 References
620(1)
IV Iterative and Recursive Methods in Signal Processing 621(128)
14 Basic Concepts and Methods of Iterative Algorithms
623(47)
14.1 Definitions and qualitative properties of iterated functions
624(5)
14.1.1 Basic theorems of iterated functions
626(1)
14.1.2 Illustration of the basic theorems
627(2)
14.2 Contraction mappings
629(2)
14.3 Rates of convergence for iterative algorithms
631(1)
14.4 Newton's method
632(5)
14.5 Steepest descent
637(6)
14.5.1 Comparison and discussion: Other techniques
642(1)
Some Applications of Basic Iterative Methods
14.6 LMS adaptive filtering
643(5)
14.6.1 An example LMS application
645(1)
14.6.2 Convergence of the LMS algorithm
646(2)
14.7 Neural networks
648(12)
14.7.1 The backpropagation training algorithm
650(3)
14.7.2 The nonlinearity function
653(1)
14.7.3 The forward-backward training algorithm
654(1)
14.7.4 Adding a momentum term
654(1)
14.7.5 Neural network code
655(3)
14.7.6 How many neurons?
658(1)
14.7.7 Pattern recognition: ML or NN?
659(1)
14.8 Blind source separation
660(5)
14.8.1 A bit of information theory
660(2)
14.8.2 Applications to source separation
662(2)
14.8.3 Implementation aspects
664(1)
14.9 Exercises
665(3)
14.10 References
668(2)
15 Iteration by Composition of Mappings
670(25)
15.1 Introduction
670(1)
15.2 Alternating projections
671(5)
15.2.1 An application: bandlimited reconstruction
675(1)
15.3 Composite mappings
676(1)
15.4 Closed mappings and the global convergence theorem
677(3)
15.5 The composite mapping algorithm
680(9)
15.5.1 Bandlimited reconstruction, revisited
681(1)
15.5.2 An example: Positive sequence determination
681(2)
15.5.3 Matrix property mappings
683(6)
15.6 Projection on convex sets
689(4)
15.7 Exercises
693(1)
15.8 References
694(1)
16 Other Iterative Algorithms
695(22)
16.1 Clustering
695(6)
16.1.1 An example application: Vector quantization
695(2)
16.1.2 An example application: Pattern recognition
697(1)
16.1.3 k-means clustering
698(2)
16.1.4 Clustering using fuzzy k-means
700(1)
16.2 Iterative methods for computing inverses of matrices
701(5)
16.2.1 The Jacobi method
702(1)
16.2.2 Gauss-Seidel iteration
703(2)
16.2.3 Successive over-relaxation (SOR)
705(1)
16.3 Algebraic reconstruction techniques (ART)
706(2)
16.4 Conjugate-direction methods
708(2)
16.5 Conjugate-gradient method
710(3)
16.6 Nonquadratic problems
713(1)
16.7 Exercises
713(2)
16.8 References
715(2)
17 The EM Algorithm in Signal Processing
717(32)
17.1 An introductory example
718(3)
17.2 General statement of the EM algorithm
721(2)
17.3 Convergence of the EM algorithm
723(2)
17.3.1 Convergence rate: Some generalizations
724(1)
Example applications of the EM algorithm
17.4 Introductory example, revisited
725(1)
17.5 Emission computed tomography (ECT) image reconstruction
725(4)
17.6 Active noise cancellation (ANC)
729(3)
17.7 Hidden Markov models
732(8)
17.7.1 The E- and M-steps
734(1)
17.7.2 The forward and backward probabilities
735(1)
17.7.3 Discrete output densities
736(1)
17.7.4 Gaussian output densities
736(1)
17.7.5 Normalization
737(1)
17.7.6 Algorithms for HMMs
738(2)
17.8 Spread-spectrum, multiuser communication
740(3)
17.9 Summary
743(1)
17.10 Exercises
744(3)
17.11 References
747(2)
V Methods of Optimization 749(106)
18 Theory of Constrained Optimization
751(36)
18.1 Basic definitions
751(4)
18.2 Generalization of the chain rule to composite functions
755(2)
18.3 Definitions for constrained optimization
757(1)
18.4 Equality constraints: Lagrange multipliers
758(9)
18.4.1 Examples of equality-constrained optimization
764(3)
18.5 Second-order conditions
767(3)
18.6 Interpretation of the Lagrange multipliers
770(3)
18.7 Complex constraints
773(1)
18.8 Duality in optimization
773(4)
18.9 Inequality constraints: Kuhn-Tucker conditions
777(7)
18.9.1 Second-order conditions for inequality constraints
783(1)
18.9.2 An extension: Fritz John conditions
783(1)
18.10 Exercises
784(2)
18.11 References
786(1)
19 Shortest-Path Algorithms and Dynamic Programming
787(31)
19.1 Definitions for graphs
787(2)
19.2 Dynamic programming
789(2)
19.3 The Viterbi algorithm
791(4)
19.4 Code for the Viterbi algorithm
795(5)
19.4.1 Related algorithms: Dijkstra's and Warshall's
798(1)
19.4.2 Complexity comparisons of Viterbi and Dijkstra
799(1)
Applications of path search algorithms
19.5 Maximum-likelihood sequence estimation
800(8)
19.5.1 The intersymbol interference (ISI) channel
800(4)
19.5.2 Code-division multiple access
804(2)
19.5.3 Convolutional decoding
806(2)
19.6 HMM likelihood analysis and HMM training
808(5)
19.6.1 Dynamic warping
811(2)
19.7 Alternatives to shortest-path algorithms
813(2)
19.8 Exercises
815(2)
19.9 References
817(1)
20 Linear Programming
818(37)
20.1 Introduction to linear programming
818(1)
20.2 Putting a problem into standard form
819(4)
20.2.1 Inequality constraints and slack variables
819(1)
20.2.2 Free variables
820(2)
20.2.3 Variable-bound constraints
822(1)
20.2.4 Absolute value in the objective
823(1)
20.3 Simple examples of linear programming
823(1)
20.4 Computation of the linear programming solution
824(12)
20.4.1 Basic variables
824(2)
20.4.2 Pivoting
826(2)
20.4.3 Selecting variables on which to pivot
828(1)
20.4.4 The effect of pivoting on the value of the problem
829(1)
20.4.5 Summary of the simplex algorithm
830(1)
20.4.6 Finding the initial basic feasible solution
831(3)
20.4.7 MATLAB(R) code for linear programming
834(1)
20.4.8 Matrix notation for the simplex algorithm
835(1)
20.5 Dual problems
836(2)
20.6 Karmarkar's algorithm for LP
838(8)
20.6.1 Conversion to Karmarkar standard form
842(2)
20.6.2 Convergence of the algorithm
844(2)
20.6.3 Summary and extensions
846(1)
Examples and applications of linear programming
20.7 Linear-phase FIR filter design
846(3)
20.7.1 Least-absolute-error approximation
847(2)
20.8 Linear optimal control
849(1)
20.9 Exercises
850(3)
20.10 References
853(2)
A Basic Concepts and Definitions
855(22)
A.1 Set theory and notation
855(4)
A.2 Mappings and functions
859(1)
A.3 Convex functions
860(1)
A.4 O and o Notation
861(1)
A.5 Continuity
862(2)
A.6 Differentiation
864(5)
A.6.1 Differentiation with a single real variable
864(1)
A.6.2 Partial derivatives and gradients on R^(m)
865(2)
A.6.3 Linear approximation using the gradient
867(1)
A.6.4 Taylor series
868(1)
A.7 Basic constrained optimization
869(1)
A.8 The Holder and Minkowski inequalities
870(1)
A.9 Exercises
871(5)
A.10 References
876(1)
B Completing the Square
877(3)
B.1 The scalar case
877(2)
B.2 The matrix case
879(1)
B.3 Exercises
879(1)
C Basic Matrix Concepts
880(11)
C.1 Notational conventions
880(2)
C.2 Matrix Identity and Inverse
882(1)
C.3 Transpose and trace
883(2)
C.4 Block (partitioned) matrices
885(1)
C.5 Determinants
885(4)
C.5.1 Basic properties of determinants
885(2)
C.5.2 Formulas for the determinant
887(2)
C.5.3 Determinants and matrix inverses
889(1)
C.6 Exercises
889(1)
C.7 References
890(1)
D Random Processes
891(5)
D.1 Definitions of means and correlations
891(1)
D.2 Stationarity
892(1)
D.3 Power spectral-density functions
893(1)
D.4 Linear systems with stochastic inputs
894(1)
D.4.1 Continuous-time signals and systems
894(1)
D.4.2 Discrete-time signals and systems
895(1)
D.5 References
895(1)
E Derivatives and Gradients
896(17)
E.1 Derivatives of vectors and scalars with respect to a real vector
896(3)
E.1.1 Some important gradients
897(2)
E.2 Derivatives of real-valued functions of real matrices
899(2)
E.3 Derivatives of matrices with respect to scalars, and vice versa
901(2)
E.4 The transformation principle
903(1)
E.5 Derivatives of products of matrices
903(1)
E.6 Derivatives of powers of a matrix
904(2)
E.7 Derivatives involving the trace
906(2)
E.8 Modifications for derivatives of complex vectors and matrices
908(2)
E.9 Exercises
910(2)
E.10 References
912(1)
F Conditional Expectations of Multinomial and Poisson r.v.s
913(2)
F.1 Multinomial distributions
913(1)
F.2 Poisson random variables
914(1)
F.3 Exercises
914(1)
Bibliography 915(14)
Index 929

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.

Excerpts

Preface

Rationale

The purpose of this book is to bridge the gap between introductory signal processing classes and the mathematics prevalent in contemporary signal processing research and practice, by providing a unified, applied treatment of fundamental mathematics, seasoned with demonstrations using MATLAB. This book is intended not only for current students of signal processing, but also for practicing engineers who must be able to access the signal processing research literature, and for researchers looking for a particular result to apply. It is thus intended both as a textbook and as a reference.

Both the theory and the practice of signal processing contribute to and draw from a variety of disciplines: controls, communications, system identification, information theory, artificial intelligence, spectroscopy, pattern recognition, tomography, image analysis, and data acquisition, among others. To fulfill its role in these diverse areas, signal processing employs a variety of mathematical tools, including transform theory, probability, optimization, detection theory, estimation theory, numerical analysis, linear algebra, functional analysis, and many others. The practitioner of signal processing -- the "signal processor" -- may use several of these tools in the solution of a problem; for example, setting up a signal reconstruction algorithm, and then optimizing the parameters of the algorithm for optimum performance. Practicing signal processors must have knowledge of both the theory and the implementation of the mathematics: how and why it works, and how to make the computer do it. The breadth of mathematics employed in signal processing, coupled with the opportunity to apply that math to problems of engineering interest, makes the field both interesting and rewarding.

The mathematical aspects of signal processing also introduce some of its major challenges: how is a student or engineering practitioner to become versed in such a variety of mathematical techniques while still keeping an eye toward applications? Introductory texts on signal processing tend to focus heavily on transform techniques and filter-based applications. While this is an essential part of the training of a signal processor, it is only the tip of the iceberg of material required by a practicing engineer. On the other hand, more advanced texts typically develop mathematical tools that are specific to a narrow aspect of signal processing, while perhaps missing connections between these ideas and related areas of research. Neither of these approaches provides sufficient background to read and understand broadly in the signal processing research literature, nor do they equip the student with many signal processing tools.

The signal processing literature has moved steadily toward increasing sophistication: applications of the singular value decomposition (SVD) and wavelet transforms abound; everyone knows something about these by now, or should! Part of this move toward sophistication is fueled by computer capabilities, since computations that formerly required considerable effort and understanding are now embodied in convenient mathematical packages. A naive view might hold that this automation threatens the expertise of the engineer: Why hire a specialist to do what anyone can do in ten minutes with a MATLAB toolbox? Viewed more positively, the power of the computer provides a variety of new opportunities, as engineers are freed from computational drudgery to pursue new applications.
Computer software provides platforms upon which innovative ideas may be developed with ever greater ease. Taking advantage of this new freedom to develop useful concepts will require a solid understanding of mathematics, both to appreciate what is in the toolboxes and to extend beyond their limits. This book is intended to provide a foundation in the requisite mathematics. We assume that students using this text have had a course in traditional transform-based digital signal processing at the senior or first-year graduate level, and a traditional course in stochastic processes. Though basic concepts in these areas are reviewed, this book does not supplant the more focused coverage that these courses provide.

Features

Vector-space geometry, which puts least-squares and minimum mean-squares in the same framework, and the concept of signals as vectors in an appropriate vector space, are both emphasized. This vector-space approach provides a natural framework for topics such as wavelet transforms and digital communications, as well as the traditional topics of optimum prediction, filtering, and estimation. In this context, the more general notion of metric spaces is introduced, with a discussion of signal norms.

The linear algebra used in signal processing is thoroughly described, both in concept and in numerical implementation. While software libraries are commonly available to perform linear algebra computations, we feel that the numerical techniques presented in this book exercise student intuition regarding the geometry of vector spaces, and build understanding of the issues that must be addressed in practical problems. The presentation includes a thorough discussion of eigen-based methods of computation, including eigenfilters, MUSIC, and ESPRIT; there is also a chapter devoted to the properties and applications of the SVD. Toeplitz matrices, which appear throughout the signal processing literature, are treated both from a numerical point of view -- as an example of recursive algorithms -- and in conjunction with the lattice-filtering interpretation. The matrices in linear algebra are viewed as operators; thus, the important concept of an operator is introduced. Associated notions, such as the range, nullspace, and norm of an operator, are also presented. While a full coverage of operator theory is not provided, there is a strong foundation that can serve to build insight into other operators.

In addition to linear algebraic concepts, there is a discussion of computation. Algorithms are presented for computing the common factorizations, eigenvalues, eigenvectors, SVDs, and many other problems, with some numerical consideration for implementation. Not all of this material is necessarily intended for classroom use in a conventional signal processing course -- there will not be sufficient time in most cases. Nonetheless, it provides an important perspective to prospective practitioners, and a starting point for implementations on other platforms. Instructors may choose to emphasize certain numeric concepts because they highlight particular topics, such as the geometry of vector spaces.

The Cauchy-Schwarz inequality is used in a variety of places as an optimizing principle. Recursive least-squares (RLS) and least mean-squares (LMS) adaptive filters are presented as natural outgrowths of more fundamental concepts: matrix inverse updates and steepest descent. Neural networks and blind source separation are also presented as applications of steepest descent.
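To make the least-squares / steepest-descent connection described above concrete, here is a minimal Python (NumPy) sketch that identifies an unknown FIR filter two ways: by solving the normal equations in one batch (least squares) and by running an LMS filter that takes a steepest-descent step on the squared error at every sample. The filter length, step size, noise level, and signal model are illustrative assumptions chosen for this sketch, not values from the book (the book's own demonstrations are written in MATLAB).

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative system-identification setup (assumed, not from the book):
    # white noise x[n] drives an unknown 4-tap FIR filter, observed as d[n].
    n_taps, n_samples, step = 4, 5000, 0.01
    w_true = np.array([0.6, -0.3, 0.2, 0.1])
    x = rng.standard_normal(n_samples)
    d = np.convolve(x, w_true)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

    # Data matrix with rows [x[n], x[n-1], ..., x[n-3]].
    X = np.zeros((n_samples, n_taps))
    for k in range(n_taps):
        X[k:, k] = x[:n_samples - k]

    # Batch least squares: solve the normal equations X^T X w = X^T d
    # (the orthogonality principle in matrix form).
    w_ls = np.linalg.lstsq(X, d, rcond=None)[0]

    # LMS: stochastic steepest descent on the same squared-error cost,
    # one sample at a time.
    w_lms = np.zeros(n_taps)
    for n in range(n_taps, n_samples):
        u = x[n - np.arange(n_taps)]   # current regressor [x[n], ..., x[n-3]]
        e = d[n] - u @ w_lms           # a priori error
        w_lms += step * e * u          # gradient step on e^2 / 2

    print("true:", w_true)
    print("LS  :", np.round(w_ls, 3))
    print("LMS :", np.round(w_lms, 3))

With these settings both estimates land close to w_true: the batch solution uses all the samples at once, while LMS trades a small excess error for a cheap per-sample update.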
Several chapters are devoted to iterative and recursive methods. Though iterative methods are of great theoretical and practical significance, no other signal processing textbook provides a similar breadth of coverage. Methods presented include projection on convex sets, composite mapping, the EM algorithm, conjugate gradient, and methods of matrix inverse computation using iterative methods. Detection and estimation are presented with several applications, including spectrum estimation, phase estimation, and multidimensional digital communications. Optimization is a key concept in signal processing, and examples of optimization, both unconstrained and constrained, appear throughout the text. Both a theoretical justification for Lagrange multiplier methods and a phy
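The mention of Lagrange multipliers above can be made concrete with a short worked example; the particular problem is chosen here for illustration (it is the minimum-norm solution of a single underdetermined equation, as in Section 3.15, solved by the equality-constraint machinery of Section 18.4), not an excerpt from the book:

    \min_{x} \tfrac{1}{2}\|x\|_2^2 \quad \text{subject to} \quad a^{T}x = b.

    L(x, \lambda) = \tfrac{1}{2}x^{T}x + \lambda\,(b - a^{T}x), \qquad
    \nabla_x L = x - \lambda a = 0 \;\Rightarrow\; x = \lambda a.

    a^{T}x = b \;\Rightarrow\; \lambda = \frac{b}{a^{T}a}, \qquad
    x^{\star} = \frac{b}{a^{T}a}\,a.

Substituting the constraint into the stationarity condition fixes the multiplier, and the result is the shortest vector satisfying the constraint.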
