Identification of Physical Systems
Applications to Condition Monitoring, Fault Diagnosis, Soft Sensor and Controller Design
- Hardcover
Identification of a physical system deals with the problem of identifying its mathematical model from measured input and output data. As a physical system is generally complex and nonlinear, and its input-output data are corrupted by noise, there are fundamental theoretical and practical issues that need to be considered.
Identification of Physical Systems addresses this need, presenting a systematic, unified approach to the problem of physical system identification and its practical applications. Starting with a least-squares method, the authors develop various schemes to address the issues of accuracy, variation in the operating regimes, closed loop, and interconnected subsystems. Also presented is a non-parametric signal- or data-based scheme that provides a quick macroscopic picture of the system to complement the precise microscopic picture given by the parametric model-based scheme. Finally, a sequential integration of entirely different schemes, such as non-parametric, Kalman filter, and parametric model, is developed to meet the speed and accuracy requirements of mission-critical systems.
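The least-squares starting point mentioned above can be sketched in a few lines. The first-order model, parameter values, and noise level below are illustrative assumptions, not an example taken from the book:

```python
import numpy as np

# Hypothetical first-order model: y[k] = a*y[k-1] + b*u[k-1] + e[k]
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5           # assumed "true" parameters
N = 500
u = rng.standard_normal(N)          # persistently exciting input
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.05 * rng.standard_normal()

# Linear regression form: y[k] = H[k] @ theta, with regressor H[k] = [y[k-1], u[k-1]]
H = np.column_stack([y[:-1], u[:-1]])
theta_hat, *_ = np.linalg.lstsq(H, y[1:], rcond=None)  # solves the normal equation
print(theta_hat)  # close to [0.8, 0.5]
```

With a sufficiently rich input and zero-mean noise, the estimate converges to the true parameters; the accuracy, closed-loop, and noise-correlation issues the book treats arise precisely when these idealized assumptions fail.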
Key features:
Provides a clear understanding of theoretical and practical issues in identification and its applications, enabling the reader to grasp the theory and apply it to practical problems
Offers a self-contained guide by including the background necessary to understand this interdisciplinary subject
Includes case studies for the application of identification on physical laboratory-scale systems, as well as a number of illustrative examples throughout the book
Identification of Physical Systems is a comprehensive reference for researchers and practitioners working in this field and is also a useful source of information for graduate students in electrical, computer, biomedical, chemical, and mechanical engineering.
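The Kalman-filter-based fusion of model and measurement described in the blurb can be illustrated with a minimal scalar sketch; the scalar plant, its gain, and the noise variances below are assumptions chosen for illustration only:

```python
import numpy as np

# Assumed scalar system: x[k+1] = a*x[k] + w[k],  y[k] = x[k] + v[k]
a, q, r = 0.9, 0.01, 0.1            # illustrative plant gain and noise variances
rng = np.random.default_rng(1)
x, x_hat, p = 1.0, 0.0, 1.0         # true state, estimate, error covariance

for _ in range(200):
    x = a * x + np.sqrt(q) * rng.standard_normal()      # simulate plant
    y = x + np.sqrt(r) * rng.standard_normal()          # noisy measurement
    # Predict from the model
    x_pred = a * x_hat
    p_pred = a * a * p + q
    # Update: the gain weights measurement vs. model by the variance ratio
    k_gain = p_pred / (p_pred + r)
    x_hat = x_pred + k_gain * (y - x_pred)
    p = (1.0 - k_gain) * p_pred

print(x_hat, p, k_gain)
```

The gain settles at a steady-state value determined by the ratio of plant and measurement noise variances, which is exactly the "role of the ratio of variances" theme treated in the Kalman filter chapter.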
- Product details
- Publisher: Wiley & Sons
- 1st edition
- Number of pages: 536
- Publication date: May 12, 2014
- Language: English
- Dimensions: 251 mm x 172 mm x 32 mm
- Weight: 947 g
- ISBN-13: 9781119990123
- ISBN-10: 1119990122
- Item no.: 38475248
Scalars while y is a Nx1 Vector 128 3.4.2 Vector Case: is a Mx1 Vector 129 3.4.3 Illustrative Examples: Cramér-Rao Inequality 130 3.4.4 Fisher Information 138 3.5 Maximum Likelihood Estimation 139 3.5.1 Formulation of Maximum Likelihood Estimation 139 3.5.2 Illustrative Examples: Maximum Likelihood Estimation of Mean or Median 141 3.5.3 Illustrative Examples: Maximum Likelihood Estimation of Mean and Variance 148 3.5.4 Properties of Maximum Likelihood Estimator 154 3.6 Summary 154 3.7 Appendix: Cauchy-Schwarz Inequality 157 3.8 Appendix: Cramér-Rao Lower Bound 157 3.8.1 Scalar Case 158 3.8.2 Vector Case 160 3.9 Appendix: Fisher Information: Cauchy PDF 161 3.10 Appendix: Fisher Information for i.i.d. PDF 161 3.11 Appendix: Projection Operator 162 3.12 Appendix: Fisher Information: Part Gauss-Part Laplace 164 Problem 165 References 165 Further Readings 165

4 Estimation of Random Parameter 167 4.1 Overview 167 4.2 Minimum Mean-Squares Estimator (MMSE): Scalar Case 167 4.2.1 Conditional Mean: Optimal Estimator 168 4.3 MMSE Estimator: Vector Case 169 4.3.1 Covariance of the Estimation Error 171 4.3.2 Conditional Expectation and Its Properties 172 4.4 Expression for Conditional Mean 172 4.4.1 MMSE Estimator: Gaussian Random Variables 173 4.4.2 MMSE Estimator: Unknown is Gaussian and Measurement Non-Gaussian 174 4.4.3 The MMSE Estimator for Gaussian PDF 176 4.4.4 Illustrative Examples 178 4.5 Summary 183 4.6 Appendix: Non-Gaussian Measurement PDF 184 4.6.1 Expression for Conditional Expectation 184 4.6.2 Conditional Expectation for Gaussian x and Non-Gaussian y 185 References 188 Further Readings 188

5 Linear Least-Squares Estimation 189 5.1 Overview 189 5.2 Linear Least-Squares Approach 189 5.2.1 Linear Algebraic Model 190 5.2.2 Least-Squares Method 190 5.2.3 Objective Function 191 5.2.4 Optimal Least-Squares Estimate: Normal Equation 193 5.2.5 Geometric Interpretation of Least-Squares Estimate: Orthogonality Principle 194 5.3 Performance of the Least-Squares Estimator 195 5.3.1 Unbiasedness of the Least-Squares Estimate 195 5.3.2 Covariance of the Estimation Error 197 5.3.3 Properties of the Residual 198 5.3.4 Model and Systemic Errors: Bias and the Variance Errors 201 5.4 Illustrative Examples 205 5.4.1 Non-Zero-Mean Measurement Noise 209 5.5 Cramér-Rao Lower Bound 209 5.6 Maximum Likelihood Estimation 210 5.6.1 Illustrative Examples 210 5.7 Least-Squares Solution of Under-Determined System 212 5.8 Singular Value Decomposition 213 5.8.1 Illustrative Example: Singular and Eigenvalues of Square Matrices 215 5.8.2 Computation of Least-Squares Estimate Using the SVD 216 5.9 Summary 218 5.10 Appendix: Properties of the Pseudo-Inverse and the Projection Operator 221 5.10.1 Over-Determined System 221 5.10.2 Under-Determined System 222 5.11 Appendix: Positive Definite Matrices 222 5.12 Appendix: Singular Value Decomposition of a Matrix 223 5.12.1 SVD and Eigendecompositions 225 5.12.2 Matrix Norms 226 5.12.3 Least-Squares Estimate for Any Arbitrary Data Matrix H 226 5.12.4 Pseudo-Inverse of Any Arbitrary Matrix 228 5.12.5 Bounds on the Residual and the Covariance of the Estimation Error 228 5.13 Appendix: Least-Squares Solution for Under-Determined System 228 5.14 Appendix: Computation of Least-Squares Estimate Using the SVD 229 References 229 Further Readings 230

6 Kalman Filter 231 6.1 Overview 231 6.2 Mathematical Model of the System 233 6.2.1 Model of the Plant 233 6.2.2 Model of the Disturbance and Measurement Noise 233 6.2.3 Integrated Model of the System 234 6.2.4 Expression for the Output of the Integrated System 235 6.2.5 Linear Regression Model 235 6.2.6 Observability 236 6.3 Internal Model Principle 236 6.3.1 Controller Design Using the Internal Model Principle 237 6.3.2 Internal Model (IM) of a Signal 237 6.3.3 Controller Design 238 6.3.4 Illustrative Example: Controller Design 241 6.4 Duality Between Controller and an Estimator Design 244 6.4.1 Estimation Problem 244 6.4.2 Estimator Design 244 6.5 Observer: Estimator for the States of a System 246 6.5.1 Problem Formulation 246 6.5.2 The Internal Model of the Output 246 6.5.3 Illustrative Example: Observer with Internal Model Structure 247 6.6 Kalman Filter: Estimator of the States of a Stochastic System 250 6.6.1 Objectives of the Kalman Filter 251 6.6.2 Necessary Structure of the Kalman Filter 252 6.6.3 Internal Model of a Random Process 252 6.6.4 Illustrative Example: Role of an Internal Model 254 6.6.5 Model of the Kalman Filter 255 6.6.6 Optimal Kalman Filter 256 6.6.7 Optimal Scalar Kalman Filter 256 6.6.8 Optimal Kalman Gain 260 6.6.9 Comparison of the Kalman Filters: Integrated and Plant Models 260 6.6.10 Steady-State Kalman Filter 261 6.6.11 Internal Model and Statistical Approaches 261 6.6.12 Optimal Information Fusion 262 6.6.13 Role of the Ratio of Variances 262 6.6.14 Fusion of Information from the Model and the Measurement 263 6.6.15 Illustrative Example: Fusion of Information 264 6.6.16 Orthogonal Properties of the Kalman Filter 266 6.6.17 Ensemble and Time Averages 266 6.6.18 Illustrative Example: Orthogonality Properties of the Kalman Filter 267 6.7 The Residual of the Kalman Filter with Model Mismatch and Non-Optimal Gain 267 6.7.1 State Estimation Error with Model Mismatch 268 6.7.2 Illustrative Example: Residual with Model Mismatch and Non-Optimal Gain 271 6.8 Summary 274 6.9 Appendix: Estimation Error Covariance and the Kalman Gain 277 6.10 Appendix: The Role of the Ratio of Plant and the Measurement Noise Variances 279 6.11 Appendix: Orthogonal Properties of the Kalman Filter 279 6.11.1 Span of a Matrix 284 6.11.2 Transfer Function Formulae 284 6.12 Appendix: Kalman Filter Residual with Model Mismatch 285 References 287

7 System Identification 289 7.1 Overview 289 7.2 System Model 291 7.2.1 State-Space Model 291 7.2.2 Assumptions 292 7.2.3 Frequency-Domain Model 292 7.2.4 Input Signal for System Identification 293 7.3 Kalman Filter-Based Identification Model Structure 297 7.3.1 Expression for the Kalman Filter Residual 298 7.3.2 Direct Form or Colored Noise Form 300 7.3.3 Illustrative Examples: Process, Predictor, and Innovation Forms 302 7.3.4 Models for System Identification 304 7.3.5 Identification Methods 305 7.4 Least-Squares Method 307 7.4.1 Linear Matrix Model: Batch Processing 308 7.4.2 The Least-Squares Estimate 308 7.4.3 Quality of the Least-Squares Estimate 312 7.4.4 Illustrative Example of the Least-Squares Identification 313 7.4.5 Computation of the Estimates Using Singular Value Decomposition 315 7.4.6 Recursive Least-Squares Identification 316 7.5 High-Order Least-Squares Method 318 7.5.1 Justification for a High-Order Model 318 7.5.2 Derivation of a Reduced-Order Model 323 7.5.3 Formulation of Model Reduction 324 7.5.4 Model Order Selection 324 7.5.5 Illustrative Example of High-Order Least-Squares Method 325 7.5.6 Performance of the High-Order Least-Squares Scheme 326 7.6 The Prediction Error Method 327 7.6.1 Residual Model 327 7.6.2 Objective Function 327 7.6.3 Iterative Prediction Algorithm 328 7.6.4 Family of Prediction Error Algorithms 330 7.7 Comparison of High-Order Least-Squares and the Prediction Error Methods 330 7.7.1 Illustrative Example: LS, High-Order LS, and PEM 331 7.8 Subspace Identification Method 334 7.8.1 Identification Model: Predictor Form of the Kalman Filter 334 7.9 Summary 340 7.10 Appendix: Performance of the Least-Squares Approach 347 7.10.1 Correlated Error 347 7.10.2 Uncorrelated Error 347 7.10.3 Correlation of the Error and the Data Matrix 348 7.10.4 Residual Analysis 350 7.11 Appendix: Frequency-Weighted Model Order Reduction 352 7.11.1 Implementation of the Frequency-Weighted Estimator 354 7.11.2 Selection of the Frequencies 354 References 354

8 Closed-Loop Identification 357 8.1 Overview 357 8.1.1 Kalman Filter-Based Identification Model 358 8.1.2 Closed-Loop Identification Approaches 358 8.2 Closed-Loop System 359 8.2.1 Two-Stage and Direct Approaches 359 8.3 Model of the Single-Input Multi-Output System 360 8.3.1 State-Space Model of the Subsystem 360 8.3.2 State-Space Model of the Overall System 361 8.3.3 Transfer Function Model 361 8.3.4 Illustrative Example: Closed-Loop Sensor Network 362 8.4 Kalman Filter-Based Identification Model 364 8.4.1 State-Space Model of the Kalman Filter 364 8.4.2 Residual Model 365 8.4.3 The Identification Model 366 8.5 Closed-Loop Identification Schemes 366 8.5.1 The High-Order Least-Squares Method 366 8.6 Second Stage of the Two-Stage Identification 372 8.7 Evaluation on a Simulated Closed-Loop Sensor Network 372 8.7.1 The Performance of the Stage I Identification Scheme 372 8.7.2 The Performance of the Stage II Identification Scheme 373 8.8 Summary 374 References 377

9 Fault Diagnosis 379 9.1 Overview 379 9.1.1 Identification for Fault Diagnosis 380 9.1.2 Residual Generation 380 9.1.3 Fault Detection 380 9.1.4 Fault Isolation 381 9.2 Mathematical Model of the System 381 9.2.1 Linear Regression Model: Nominal System 382 9.3 Model of the Kalman Filter 382 9.4 Modeling of Faults 383 9.4.1 Linear Regression Model 383 9.5 Diagnostic Parameters and the Feature Vector 384 9.6 Illustrative Example 386 9.6.1 Mathematical Model 386 9.6.2 Feature Vector and the Influence Vectors 387 9.7 Residual of the Kalman Filter 388 9.7.1 Diagnostic Model 389 9.7.2 Key Properties of the Residual 389 9.7.3 The Role of the Kalman Filter in Fault Diagnosis 389 9.8 Fault Diagnosis 390 9.9 Fault Detection: Bayes Decision Strategy 390 9.9.1 Pattern Classification Problem: Fault Detection 391 9.9.2 Generalized Likelihood Ratio Test 392 9.9.3 Maximum Likelihood Estimate 392 9.9.4 Decision Strategy 394 9.9.5 Other Test Statistics 395 9.10 Evaluation of Detection Strategy on Simulated System 396 9.11 Formulation of Fault Isolation Problem 396 9.11.1 Pattern Classification Problem: Fault Isolation 397 9.11.2 Formulation of the Fault Isolation Scheme 398 9.11.3 Fault Isolation Tasks 399 9.12 Estimation of the Influence Vectors and Additive Fault 399 9.12.1 Parameter-Perturbed Experiment 400 9.12.2 Least-Squares Estimates 401 9.13 Fault Isolation Scheme 401 9.13.1 Sequential Fault Isolation Scheme 402 9.13.2 Isolation of the Fault 403 9.14 Isolation of a Single Fault 403 9.14.1 Fault Discriminant Function 403 9.14.2 Performance of Fault Isolation Scheme 404 9.14.3 Performance Issues and Guidelines 405 9.15 Emulators for Offline Identification 406 9.15.1 Examples of Emulators 407 9.15.2 Emulators for Multiple-Input Multiple-Output System 407 9.15.3 Role of an Emulator 408 9.15.4 Criteria for Selection 409 9.16 Illustrative Example 409 9.16.1 Mathematical Model 409 9.16.2 Selection of Emulators 410 9.16.3 Transfer Function Model 410 9.16.4 Role of the Static Emulators 411 9.16.5 Role of the Dynamic Emulator 412 9.17 Overview of Fault Diagnosis Scheme 414 9.18 Evaluation on a Simulated Example 414 9.18.1 The Kalman Filter 414 9.18.2 The Kalman Filter Residual and Its Auto-correlation 414 9.18.3 Estimation of the Influence Vectors 416 9.18.4 Fault Size Estimation 416 9.18.5 Fault Isolation 417 9.19 Summary 418 9.20 Appendix: Bayesian Multiple Composite Hypotheses Testing Problem 422 9.21 Appendix: Discriminant Function for Fault Isolation 423 9.22 Appendix: Log-Likelihood Ratio for a Sinusoid and a Constant 424 9.22.1 Determination of af, bf, and cf 424 9.22.2 Determination of the Optimal Cost 425 References 426

10 Modeling and Identification of Physical Systems 427 10.1 Overview 427 10.2 Magnetic Levitation System 427 10.2.1 Mathematical Model of a Magnetic Levitation System 427 10.2.2 Linearized Model 429 10.2.3 Discrete-Time Equivalent of Continuous-Time Models 430 10.2.4 Identification Approach 432 10.2.5 Identification of the Magnetic Levitation System 433 10.3 Two-Tank Process Control System 436 10.3.1 Model of the Two-Tank System 436 10.3.2 Identification of the Closed-Loop Two-Tank System 438 10.4 Position Control System 442 10.4.1 Experimental Setup 442 10.4.2 Mathematical Model of the Position Control System 442 10.5 Summary 444 References 446

11 Fault Diagnosis of Physical Systems 447 11.1 Overview 447 11.2 Two-Tank Physical Process Control System 448 11.2.1 Objective 448 11.2.2 Identification of the Physical System 448 11.2.3 Fault Detection 449 11.2.4 Fault Isolation 451 11.3 Position Control System 452 11.3.1 The Objective 452 11.3.2 Identification of the Physical System 452 11.3.3 Detection of Fault 455 11.3.4 Fault Isolation 455 11.3.5 Fault Isolability 455 11.4 Summary 457 References 457

12 Fault Diagnosis of a Sensor Network 459 12.1 Overview 459 12.2 Problem Formulation 461 12.3 Fault Diagnosis Using a Bank of Kalman Filters 461 12.4 Kalman Filter for Pairs of Measurements 462 12.5 Kalman Filter for the Reference Input-Measurement Pair 463 12.6 Kalman Filter Residual: A Model Mismatch Indicator 463 12.6.1 Residual for a Pair of Measurements 463 12.7 Bayes Decision Strategy 464 12.8 Truth Table of Binary Decisions 465 12.9 Illustrative Example 467 12.10 Evaluation on a Physical Process Control System 469 12.11 Fault Detection and Isolation 470 12.11.1 Comparison with Other Approaches 473 12.12 Summary 474 12.13 Appendix 475 12.13.1 Map Relating yi(z) to yj(z) 475 12.13.2 Map Relating r(z) to yj(z) 476 References 477

13 Soft Sensor 479 13.1 Review 479 13.1.1 Benefits of a Soft Sensor 479 13.1.2 Kalman Filter 479 13.1.3 Reliable Identification of the System 480 13.1.4 Robust Controller Design 480 13.1.5 Fault-Tolerant System 481 13.2 Mathematical Formulation 481 13.2.1 Transfer Function Model 482 13.2.2 Uncertainty Model 482 13.3 Identification of the System 483 13.3.1 Perturbed Parameter Experiment 484 13.3.2 Least-Squares Estimation 484 13.3.3 Selection of the Model Order 485 13.3.4 Identified Nominal Model 485 13.3.5 Illustrative Example 486 13.4 Model of the Kalman Filter 488 13.4.1 Role of the Kalman Filter 488 13.4.2 Model of the Kalman Filter 489 13.4.3 Augmented Model of the Plant and the Kalman Filter 489 13.5 Robust Controller Design 489 13.5.1 Objective 489 13.5.2 Augmented Model 490 13.5.3 Closed-Loop Performance and Stability 490 13.5.4 Uncertainty Model 491 13.5.5 Mixed-Sensitivity Optimization Problem 492 13.5.6 State-Space Model of the Robust Control System 493 13.6 High-Performance and Fault-Tolerant Control System 494 13.6.1 Residual and Model Mismatch 494 13.6.2 Bayes Decision Strategy 495 13.6.3 High-Performance Control System 495 13.6.4 Fault-Tolerant Control System 496 13.7 Evaluation on a Simulated System: Soft Sensor 496 13.7.1 Offline Identification 497 13.7.2 Identified Model of the Plant 497 13.7.3 Mixed-Sensitivity Optimization Problem 498 13.7.4 Performance and Robustness 499 13.7.5 Status Monitoring 499 13.8 Evaluation on a Physical Velocity Control System 500 13.9 Conclusions 502 13.10 Summary 503 References 507 Index 509