Linear prediction theory and the related algorithms have matured to the point where they now form an integral part of many real-world adaptive systems. When it is necessary to extract information from a random process, we are frequently faced with the problem of analyzing and solving special systems of linear equations. In the general case these systems are overdetermined and may be characterized by additional properties, such as update and shift-invariance properties. Usually, one employs exact or approximate least-squares methods to solve the resulting class of linear equations. Mainly during the last decade, researchers in various fields have contributed techniques and nomenclature for this type of least-squares problem, and this body of methods now constitutes what we call the theory of linear prediction. The immense interest it has aroused stems largely from recent advances in processor technology, which provide the means to implement linear prediction algorithms and to operate them in real time. The practical result is a new class of high-performance adaptive systems for control, communications, and system identification applications. This monograph presumes a background in discrete-time digital signal processing, including Z-transforms, and a basic knowledge of discrete-time random processes. One of the difficulties I have encountered while writing this book is that many engineers and computer scientists lack knowledge of fundamental mathematics and geometry.
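To make the setting concrete, the following is a minimal sketch (not taken from the book) of the kind of overdetermined least-squares problem described above: a p-th order forward linear predictor fitted to a data record by solving the system X a ≈ y in the least-squares sense. The function and variable names are illustrative only.

```python
import numpy as np

def fit_linear_predictor(x, p):
    """Least-squares fit of a p-th order forward linear predictor.

    The predictor estimates x[n] from the p previous samples:
        x_hat[n] = a[0]*x[n-1] + ... + a[p-1]*x[n-p].
    Stacking one such equation for every available sample gives an
    overdetermined linear system, solved here in the least-squares sense.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Data matrix: row for sample n holds the p samples preceding x[n].
    X = np.column_stack([x[p - k - 1 : N - k - 1] for k in range(p)])
    y = x[p:]                          # samples to be predicted
    # Solve the overdetermined system X a ~= y by least squares.
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ a               # forward prediction error
    return a, residual

# Usage example: a noisy second-order autoregressive signal.
rng = np.random.default_rng(0)
x = np.zeros(500)
for n in range(2, 500):
    x[n] = 1.5 * x[n - 1] - 0.7 * x[n - 2] + 0.1 * rng.standard_normal()

a, e = fit_linear_predictor(x, p=2)
print(a)                # estimated coefficients, close to [1.5, -0.7]
print(np.mean(e**2))    # mean-squared prediction error
```

The recursive and ladder algorithms treated in the book address the same estimation problem, but update the solution sample by sample instead of solving the full system anew.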
1. Introduction
2. The Linear Prediction Model
 2.1 The Normal Equations of Linear Prediction
 2.2 Geometrical Interpretation of the Normal Equations
 2.3 Statistical Interpretation of the Normal Equations
 2.4 The Problem of Signal Observation
 2.5 Recursion Laws of the Normal Equations
 2.6 Stationarity - A Special Case of Linear Prediction
 2.7 Covariance Method and Autocorrelation Method
 2.8 Recursive Windowing Algorithms
 2.9 Backward Linear Prediction
 2.10 Chapter Summary
3. Classical Algorithms for Symmetric Linear Systems
 3.1 The Cholesky Decomposition
 3.2 The QR Decomposition
 3.3 Some More Principles for Matrix Computations
 3.4 Chapter Summary
4. Recursive Least-Squares Using the QR Decomposition
 4.1 Formulation of the Growing-Window Recursive Least-Squares Problem
 4.2 Recursive Least Squares Based on the Givens Reduction
 4.3 Systolic Array Implementation
 4.4 Iterative Vector Rotations - The CORDIC Algorithm
 4.5 Recursive QR Decomposition Using a Second-Order Window
 4.6 Alternative Formulations of the QRLS Problem
 4.7 Implicit Error Computation
 4.8 Chapter Summary
5. Recursive Least-Squares Transversal Algorithms
 5.1 The Recursive Least-Squares Algorithm
 5.2 Potter's Square-Root Normalized RLS Algorithm
 5.3 Update Properties of the RLS Algorithm
 5.4 Kubin's Selective Memory RLS Algorithms
 5.5 Fast RLS Transversal Algorithms
 5.6 Descent Transversal Algorithms
 5.7 Chapter Summary
6. The Ladder Form
 6.1 The Recursion Formula for Orthogonal Projections
 6.2 Computing Time-Varying Transversal Predictor Parameters from the Ladder Reflection Coefficients
 6.3 Stationary Case - The PARCOR Ladder Form
 6.4 Relationships Between PARCOR Ladder Form and Transversal Predictor
 6.5 The Feed-Back PARCOR Ladder Form
 6.6 Frequency Domain Description of PARCOR Ladder Forms
 6.7 Stability of the Feed-Back PARCOR Ladder Form
 6.8 Burg's Harmonic Mean PARCOR Ladder Algorithm
 6.9 Determination of Model Order
 6.10 Chapter Summary
7. Levinson-Type Ladder Algorithms
 7.1 The Levinson-Durbin Algorithm
 7.2 Computing the Autocorrelation Coefficients from the PARCOR Ladder Reflection Coefficients - The "Inverse" Levinson-Durbin Algorithm
 7.3 Some More Properties of Toeplitz Systems and the Levinson-Durbin Algorithm
 7.4 Split Levinson Algorithms
 7.5 A Levinson-Type Least-Squares Ladder Estimation Algorithm
 7.6 The Makhoul Covariance Ladder Algorithm
 7.7 Chapter Summary
8. Covariance Ladder Algorithms
 8.1 The LeRoux-Gueguen Algorithm
 8.2 The Cumani Covariance Ladder Algorithm
 8.3 Recursive Covariance Ladder Algorithms
 8.4 Split Schur Algorithms
 8.5 Chapter Summary
9. Fast Recursive Least-Squares Ladder Algorithms
 9.1 The Exact Time-Update Theorem of Projection Operators
 9.2 The Algorithm of Lee and Morf
 9.3 Other Forms of Lee's Algorithm
 9.4 Gradient Adaptive Ladder Algorithms
 9.5 Lee's Normalized RLS Ladder Algorithm
 9.6 Chapter Summary
10. Special Signal Models and Extensions
 10.1 Joint Process Estimation
 10.2 ARMA System Identification
 10.3 Identification of Vector Autoregressive Processes
 10.4 Parametric Spectral Estimation
 10.5 Relationships Between Parameter Estimation and Kalman Filter Theory
 10.6 Chapter Summary
11. Concluding Remarks and Applications
A.1 Summary of the Most Important Forward/Backward Linear Prediction Relationships
A.2 New PORLA Algorithms and Their Systolic Array Implementation
A.3 Vector Case of New PORLA Algorithms