Ali H. Sayed
Adaptive Filters (eBook, ePUB)
€154.99 (incl. VAT)
Available immediately via download
- Format: ePub
Adaptive filtering is a topic of immense practical and theoretical value, having applications in areas ranging from digital and wireless communications to biomedical systems. This book enables readers to gain a gradual and solid introduction to the subject, its applications to a variety of topical problems, existing limitations, and extensions of current theories. The book consists of eleven parts, each containing a series of focused lectures and ending with bibliographic comments, problems, and computer projects with MATLAB solutions.
- Devices: eReader
- With copy protection (DRM)
- Size: 36.74 MB
For legal reasons, this download can only be delivered to billing addresses in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.
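As a taste of the algorithms the description above refers to, here is a minimal Python sketch of the LMS filter developed in Chapter 10, applied to a toy version of the channel-estimation setting of Section 10.5. The filter order, step size mu, noise level, and the simulated 4-tap channel h are illustrative assumptions, and the sketch is not the book's own MATLAB solution.

```python
import numpy as np

def lms(u, d, M, mu):
    """Least-mean-squares (LMS) adaptive filter.

    u  : input sequence
    d  : desired sequence (same length as u)
    M  : number of filter taps
    mu : step size
    Returns the final weight vector w and the a priori errors e.
    """
    w = np.zeros(M)
    e = np.zeros(len(u))
    for i in range(M - 1, len(u)):
        x = u[i - M + 1:i + 1][::-1]   # regressor [u[i], ..., u[i-M+1]]
        e[i] = d[i] - w @ x            # a priori estimation error
        w = w + mu * e[i] * x          # LMS weight update
    return w, e

# Toy channel estimation: identify an assumed unknown 4-tap FIR channel.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.4, 0.2, 0.1])               # "unknown" channel (assumed)
u = rng.standard_normal(5000)                      # white training input
d = np.convolve(u, h)[:len(u)] + 0.01 * rng.standard_normal(5000)
w, e = lms(u, d, M=4, mu=0.01)
print(np.round(w, 3))   # approaches h as the filter converges
```

Averaging the squared errors e over repeated runs yields the kind of ensemble-average learning curve discussed in Section 10.8.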
Product details
- Publisher: John Wiley & Sons
- Pages: 832
- Publication date: October 11, 2011
- Language: English
- ISBN-13: 9781118210840
- Item no.: 37350872
Ali H. Sayed is Professor of Electrical Engineering at UCLA, where he established and directs the Adaptive Systems Laboratory. He is a Fellow of the IEEE for his contributions to adaptive filtering and estimation algorithms. His research has been recognized with the 2003 Kuwait Prize, the 2005 Terman Award, and several IEEE Best Paper Awards.
Preface and Acknowledgments.
Notation and Symbols.
BACKGROUND MATERIAL.
A. Random Variables.
A.1 Variance of a Random Variable.
A.2 Dependent Random Variables.
A.3 Complex-Valued Random Variables.
A.4 Vector-Valued Random Variables.
A.5 Gaussian Random Vectors.
B. Linear Algebra.
B.1 Hermitian and Positive-Definite Matrices.
B.2 Range Spaces and Nullspaces of Matrices.
B.3 Schur Complements.
B.4 Cholesky Factorization.
B.5 QR Decomposition.
B.6 Singular Value Decomposition.
B.7 Kronecker Products.
C. Complex Gradients.
C.1 Cauchy-Riemann Conditions.
C.2 Scalar Arguments.
C.3 Vector Arguments.
PART I: OPTIMAL ESTIMATION.
1. Scalar-Valued Data.
1.1 Estimation Without Observations.
1.2 Estimation Given Dependent Observations.
1.3 Orthogonality Principle.
1.4 Gaussian Random Variables.
2. Vector-Valued Data.
2.1 Optimal Estimator in the Vector Case.
2.2 Spherically Invariant Gaussian Variables.
2.3 Equivalent Optimization Criterion.
Summary and Notes.
Problems and Computer Projects.
PART II: LINEAR ESTIMATION.
3. Normal Equations.
3.1 Mean-Square Error Criterion.
3.2 Minimization by Differentiation.
3.3 Minimization by Completion-of-Squares.
3.4 Minimization of the Error Covariance Matrix.
3.5 Optimal Linear Estimator.
4. Orthogonality Principle.
4.1 Design Examples.
4.2 Orthogonality Condition.
4.3 Existence of Solutions.
4.4 Nonzero-Mean Variables.
5. Linear Models.
5.1 Estimation using Linear Relations.
5.2 Application: Channel Estimation.
5.3 Application: Block Data Estimation.
5.4 Application: Linear Channel Equalization.
5.5 Application: Multiple-Antenna Receivers.
6. Constrained Estimation.
6.1 Minimum-Variance Unbiased Estimation.
6.2 Example: Mean Estimation.
6.3 Application: Channel and Noise Estimation.
6.4 Application: Decision Feedback Equalization.
6.5 Application: Antenna Beamforming.
7. Kalman Filter.
7.1 Innovations Process.
7.2 State-Space Model.
7.3 Recursion for the State Estimator.
7.4 Computing the Gain Matrix.
7.5 Riccati Recursion.
7.6 Covariance Form.
7.7 Measurement and Time-Update Form.
Summary and Notes.
Problems and Computer Projects.
PART III: STOCHASTIC GRADIENT ALGORITHMS.
8. Steepest-Descent Technique.
8.1 Linear Estimation Problem.
8.2 Steepest-Descent Method.
8.3 More General Cost Functions.
9. Transient Behavior.
9.1 Modes of Convergence.
9.2 Optimal Step-Size.
9.3 Weight-Error Vector Convergence.
9.4 Time Constants.
9.5 Learning Curve.
9.6 Contour Curves of the Error Surface.
9.7 Iteration-Dependent Step-Sizes.
9.8 Newton's Method.
10. LMS Algorithm.
10.1 Motivation.
10.2 Instantaneous Approximation.
10.3 Computational Cost.
10.4 Least-Perturbation Property.
10.5 Application: Adaptive Channel Estimation.
10.6 Application: Adaptive Channel Equalization.
10.7 Application: Decision-Feedback Equalization.
10.8 Ensemble-Average Learning Curves.
11. Normalized LMS Algorithm.
11.1 Instantaneous Approximation.
11.2 Computational Cost.
11.3 Power Normalization.
11.4 Least-Perturbation Property.
12. Other LMS-Type Algorithms.
12.1 Non-Blind Algorithms.
12.2 Blind Algorithms.
12.3 Some Properties.
13. Affine Projection Algorithm.
13.1 Instantaneous Approximation.
13.2 Computational Cost.
13.3 Least-Perturbation Property.
13.4 Affine Projection Interpretation.
14. RLS Algorithm.
14.1 Instantaneous Approximation.
14.2 Computational Cost.
Summary and Notes.
Problems and Computer Projects.
PART IV: MEAN-SQUARE PERFORMANCE.
15. Energy Conservation.
15.1 Performance Measure.
15.2 Stationary Data Model.
15.3 Energy Conservation Relation.
15.4 Variance Relation.
15.A Interpretations of the Energy Relation.
16. Performance of LMS.
16.1 Variance Relation.
16.2 Small Step-Sizes.
16.3 Separation Principle.
16.4 White Gaussian Input.
16.5 Statement of Results.
16.6 Simulation Results.
17. Performance of NLMS.
17.1 Separation Principle.
17.2 Simulation Results.
17.A Relating NLMS to LMS.
18. Performance of Sign-Error LMS.
18.1 Real-Valued Data.
18.2 Complex-Valued Data.
18.3 Simulation Results.
19. Performance of RLS and Other Filters.
19.1 Performance of RLS.
19.2 Performance of Other Filters.
19.3 Performance Table for Small Step-Sizes.
20. Nonstationary Environments.
20.1 Motivation.
20.2 Nonstationary Data Model.
20.3 Energy Conservation Relation.
20.4 Variance Relation.
21. Tracking Performance.
21.1 Performance of LMS.
21.2 Performance of NLMS.
21.3 Performance of Sign-Error LMS.
21.4 Performance of RLS.
21.5 Comparison of Tracking Performance.
21.6 Comparing RLS and LMS.
21.7 Performance of Other Filters.
21.8 Performance Table for Small Step-Sizes.
Summary and Notes.
Problems and Computer Projects.
PART V: TRANSIENT PERFORMANCE.
22. Weighted Energy Conservation.
22.1 Data Model.
22.2 Data-Normalized Adaptive Filters.
22.3 Weighted Energy Conservation Relation.
22.4 Weighted Variance Relation.
23. LMS with Gaussian Regressors.
23.1 Mean and Variance Relations.
23.2 Mean Behavior.
23.3 Mean-Square Behavior.
23.4 Mean-Square Stability.
23.5 Steady-State Performance.
23.6 Small Step-Size Approximations.
23.A Convergence Time.
24. LMS with Non-Gaussian Regressors.
24.1 Mean and Variance Relations.
24.2 Mean-Square Stability and Performance.
24.3 Small Step-Size Approximations.
24.A Independence and Averaging Analysis.
25. Data-Normalized Filters.
25.1 NLMS Filter.
25.2 Data-Normalized Filters.
25.A Stability Bound.
25.B Stability of NLMS.
Summary and Notes.
Problems and Computer Projects.
PART VI: BLOCK ADAPTIVE FILTERS.
26. Transform Domain Adaptive Filters.
26.1 Transform-Domain Filters.
26.2 DFT-Domain LMS.
26.3 DCT-Domain LMS.
26.A DCT-Transformed Regressors.
27. Efficient Block Convolution.
27.1 Motivation.
27.2 Block Data Formulation.
27.3 Block Convolution.
28. Block and Subband Adaptive Filters.
28.1 DFT Block Adaptive Filters.
28.2 Subband Adaptive Filters.
28.A Another Constrained DFT Block Filter.
28.B Overlap-Add Block Adaptive Filters.
Summary and Notes.
Problems and Computer Projects.
PART VII: LEAST-SQUARES METHODS.
29. Least-Squares Criterion.
29.1 Least-Squares Problem.
29.2 Geometric Argument.
29.3 Algebraic Arguments.
29.4 Properties of Least-Squares Solution.
29.5 Projection Matrices.
29.6 Weighted Least-Squares.
29.7 Regularized Least-Squares.
29.8 Weighted Regularized Least-Squares.
30. Recursive Least-Squares.
30.1 Motivation.
30.2 RLS Algorithm.
30.3 Regularization.
30.4 Conversion Factor.
30.5 Time-Update of the Minimum Cost.
30.6 Exponentially-Weighted RLS Algorithm.
31. Kalman Filtering and RLS.
31.1 Equivalence in Linear Estimation.
31.2 Kalman Filtering and Recursive Least-Squares.
31.A Extended RLS Algorithms.
32. Order and Time-Update Relations.
32.1 Backward Order-Update Relations.
32.2 Forward Order-Update Relations.
32.3 Time-Update Relation.
Summary and Notes.
Problems and Computer Projects.
PART VIII: ARRAY ALGORITHMS.
33. Norm and Angle Preservation.
33.1 Some Difficulties.
33.2 Square-Root Factors.
33.3 Norm and Angle Preservation.
33.4 Motivation for Array Methods.
34. Unitary Transformations.
34.1 Givens Rotations.
34.2 Householder Transformations.
35. QR and Inverse QR Algorithms.
35.1 Inverse QR Algorithm.
35.2 QR Algorithm.
35.3 Extended QR Algorithm.
35.A Array Algorithms for Kalman Filtering.
Summary and Notes.
Problems and Computer Projects.
PART IX: FAST RLS ALGORITHMS.
36. Hyperbolic Rotations.
36.1 Hyperbolic Givens Rotations.
36.2 Hyperbolic Householder Transformations.
36.3 Hyperbolic Basis Rotations.
37. Fast Array Algorithm.
37.1 Time-Update of the Gain Vector.
37.2 Time-Update of the Conversion Factor.
37.3 Initial Conditions.
37.4 Array Algorithm.
37.A Chandrasekhar Filter.
38. Regularized Prediction Problems.
38.1 Regularized Backward Prediction.
38.2 Regularized Forward Prediction.
38.3 Low-Rank Factorization.
39. Fast Fixed-Order Filters.
39.1 Fast Transversal Filter.
39.2 FAEST Filter.
39.3 Fast Kalman Filter.
39.4 Stability Issues.
Summary and Notes.
Problems and Computer Projects.
PART X: LATTICE FILTERS.
40. Three Basic Estimation Problems.
40.1 Motivation for Lattice Filters.
40.2 Joint Process Estimation.
40.3 Backward Estimation Problem.
40.4 Forward Estimation Problem.
40.5 Time and Order-Update Relations.
41. Lattice Filter Algorithms.
41.1 Significance of Data Structure.
41.2 A Posteriori-Based Lattice Filter.
41.3 A Priori-Based Lattice Filter.
42. Error-Feedback Lattice Filters.
42.1 A Priori Error-Feedback Lattice Filter.
42.2 A Posteriori Error-Feedback Lattice Filter.
42.3 Normalized Lattice Filter.
43. Array Lattice Filters.
43.1 Order-Update of Output Estimation Errors.
43.2 Order-Update of Backward Estimation Errors.
43.3 Order-Update of Forward Estimation Errors.
43.4 Significance of Data Structure.
Summary and Notes.
Problems and Computer Projects.
PART XI: ROBUST FILTERS.
44. Indefinite Least-Squares.
44.1 Indefinite Least-Squares.
44.2 Recursive Minimization Algorithm.
44.3 Time-Update of the Minimum Cost.
44.4 Singular Weighting Matrices.
44.A Stationary Points.
44.B Inertia Conditions.
45. Robust Adaptive Filters.
45.1 A Posteriori-Based Robust Filters.
45.2 ε-NLMS Algorithm.
45.3 A Priori-Based Robust Filters.
45.4 LMS Algorithm.
45.A H∞ Filters.
46. Robustness Properties.
46.1 Robustness of LMS.
46.2 Robustness of ε-NLMS.
46.3 Robustness of RLS.
Summary and Notes.
Problems and Computer Projects.
REFERENCES AND INDICES.
References.
Author Index.
Subject Index.
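Each part of the contents above closes with problems and computer projects solved in MATLAB. As one more hedged illustration, and not the book's solution, the following Python sketch implements the ε-NLMS recursion listed in Chapters 11 and 45.2: the LMS update with its step size normalized by the instantaneous regressor energy. The defaults mu=0.5 and eps=1e-6 are illustrative assumptions.

```python
import numpy as np

def eps_nlms(u, d, M, mu=0.5, eps=1e-6):
    """epsilon-NLMS adaptive filter: LMS with a power-normalized step.

    The effective step mu / (eps + ||x||^2) scales each update by the
    instantaneous regressor energy; eps guards against division by zero.
    """
    w = np.zeros(M)
    e = np.zeros(len(u))
    for i in range(M - 1, len(u)):
        x = u[i - M + 1:i + 1][::-1]              # regressor
        e[i] = d[i] - w @ x                       # a priori error
        w = w + (mu / (eps + x @ x)) * e[i] * x   # normalized update
    return w, e
```

The normalization makes the stable range of mu (0 < mu < 2) independent of the input power, which is one reason NLMS is often preferred over plain LMS in practice.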