Pierre Borne, Dumitru Popescu, Florin Gh. Filip, Dan Stefanoiu
Optimization in Engineering Sciences
Exact Methods
- Hardcover
Other customers were also interested in
- Marcel Jufer, Electric Drives, 211,99 €
- Philippe Feyel, Loop-Shaping Robust Control, 186,99 €
- Chang-liang Xia, Permanent Magnet Brushless DC Motor Drives and Controls, 268,99 €
- John Dixon, Suspension Geometry and Computation, 178,99 €
- John M Vance, Machinery Vibration and Rotordynamics, 212,99 €
- Hichem Arioui, Driving Simulation, 189,99 €
- K. T. Chau, Electric Vehicle Machines and Drives, 167,99 €
The purpose of this book is to present the main methods of static and dynamic optimization. It has been written within the framework of the European Union project ERRIC (Empowering Romanian Research on Intelligent Information Technologies), funded by the EU's FP7 Research Potential program and developed in cooperation between French and Romanian teaching researchers.
Through the principles of various proposed algorithms (with additional references), this book allows the interested reader to explore various methods of implementation: linear programming; nonlinear programming, particularly important given the wide variety of existing algorithms; dynamic programming, with various application examples; and Hopfield networks. The book examines optimization in relation to systems identification; optimization of dynamic systems, with particular application to process control; optimization of large-scale and complex systems; and optimization and information systems.
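As a taste of the monovariable line-search methods the book covers (the golden section method of section 2.4.2), here is a minimal sketch in Python; the quadratic test function and the tolerance are invented for illustration and assume a unimodal objective on the given interval.

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Approximate the minimizer of a unimodal f on [a, b] by
    golden-section search: shrink the bracket by 1/phi per step,
    reusing one interior evaluation point each iteration."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c = b - inv_phi * (b - a)         # left interior point
    d = a + inv_phi * (b - a)         # right interior point
    while b - a > tol:
        if f(c) < f(d):               # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                         # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Invented example: minimize (x - 2)^2 on [0, 5]; the result is near 2.
x_star = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```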
Note: this item can only be shipped to a German delivery address.
Product details
- ISTE
- Publisher: Wiley & Sons
- 1st edition
- Pages: 336
- Publication date: 26 December 2012
- Language: English
- Dimensions: 234mm x 155mm x 23mm
- Weight: 872g
- ISBN-13: 9781848214323
- ISBN-10: 1848214324
- Item no.: 36725922
- Manufacturer information
- Libri GmbH
- Europaallee 1
- 36244 Bad Hersfeld
- 06621 890
Pierre Borne is Professor "de Classe Exceptionnelle" at the Ecole Centrale de Lille, France. Dumitru Popescu is Professor at the Faculty of Computers and Automatic Control of Bucharest, Romania. Florin Gheorghe Filip is a member and vice-president of the Romanian Academy, and a senior researcher at the National Computer Science Research and Development Institute, Bucharest, Romania. Dan Stefanoiu is Professor at "Politehnica" University of Bucharest, Romania.
Foreword ix
Preface xi
List of Acronyms xiii
Chapter 1. Linear Programming 1
1.1. Objective of linear programming 1
1.2. Stating the problem 1
1.3. Lagrange method 4
1.4. Simplex algorithm 5
1.4.1. Principle 5
1.4.2. Simplicial form formulation 5
1.4.3. Transition from one simplicial form to another 7
1.4.4. Summary of the simplex algorithm 9
1.5. Implementation example 11
1.6. Linear programming applied to the optimization of resource allocation 13
1.6.1. Areas of application 13
1.6.2. Resource allocation for advertising 13
1.6.3. Optimization of a cut of paper rolls 16
1.6.4. Structure of linear program of an optimal control problem 17
Chapter 2. Nonlinear Programming 23
2.1. Problem formulation 23
2.2. Karush-Kuhn-Tucker conditions 24
2.3. General search algorithm 26
2.3.1. Main steps 26
2.3.2. Computing the search direction 29
2.3.3. Computation of advancement step 33
2.4. Monovariable methods 33
2.4.1. Coggin's method (of polynomial interpolation) 34
2.4.2. Golden section method 36
2.5. Multivariable methods 39
2.5.1. Direct search methods 39
2.5.2. Gradient methods 57
Chapter 3. Dynamic Programming 101
3.1. Principle of dynamic programming 101
3.1.1. Stating the problem 101
3.1.2. Decision problem 101
3.2. Recurrence equation of optimality 102
3.3. Particular cases 104
3.3.1. Infinite horizon stationary problems 104
3.3.2. Variable horizon problem 104
3.3.3. Random horizon problem 104
3.3.4. Taking into account sum-like constraints 105
3.3.5. Random evolution law 106
3.3.6. Initialization when the final state is imposed 106
3.3.7. The case when the necessary information is not always available 107
3.4. Examples 107
3.4.1. Route optimization 107
3.4.2. The smuggler problem 109
Chapter 4. Hopfield Networks 115
4.1. Structure 115
4.2. Continuous dynamic Hopfield networks 117
4.2.1. General problem 117
4.2.2. Application to the traveling salesman problem 121
4.3. Optimization by Hopfield networks, based on simulated annealing 123
4.3.1. Deterministic method 123
4.3.2. Stochastic method 125
Chapter 5. Optimization in System Identification 131
5.1. The optimal identification principle 131
5.2. Formulation of optimal identification problems 132
5.2.1. General problem 132
5.2.2. Formulation based on optimization theory 133
5.2.3. Formulation based on estimation theory (statistics) 136
5.3. Usual identification models 138
5.3.1. General model 138
5.3.2. Rational input/output (RIO) models 140
5.3.3. Class of autoregressive models (ARMAX) 142
5.3.4. Class of state space representation models 145
5.4. Basic least squares method 146
5.4.1. LSM type solution 146
5.4.2. Geometric interpretation of the LSM solution 151
5.4.3. Consistency of the LSM type solution 154
5.4.4. Example of application of the LSM for an ARX model 157
5.5. Modified least squares methods 158
5.5.1. Recovering lost consistency 158
5.5.2. Extended LSM 162
5.5.3. Instrumental variables method 164
5.6. Minimum prediction error method 168
5.6.1. Basic principle and algorithm 168
5.6.2. Implementation of the MPEM for ARMAX models 171
5.6.3. Convergence and consistency of MPEM type estimations 174
5.7. Adaptive optimal identification methods 175
5.7.1. Accuracy/adaptability paradigm 175
5.7.2. Basic adaptive version of the LSM 177
5.7.3. Basic adaptive version of the IVM 182
5.7.4. Adaptive window versions of the LSM and IVM 183
Chapter 6. Optimization of Dynamic Systems 191
6.1. Variational methods 191
6.1.1. Variation of a functional 191
6.1.2. Constraint-free minimization 192
6.1.3. Hamilton canonical equations 194
6.1.4. Second-order conditions 195
6.1.5. Minimization with constraints 195
6.2. Application to the optimal command of a continuous process, maximum principle 196
6.2.1. Formulation 196
6.2.2. Examples of implementation 198
6.3. Maximum principle, discrete case 206
6.4. Principle of optimal command based on quadratic criteria 207
6.5. Design of the LQ command 210
6.5.1. Finite horizon LQ command 210
6.5.2. The infinite horizon LQ command 217
6.5.3. Robustness of the LQ command 221
6.6. Optimal filtering 224
6.6.1. Kalman-Bucy predictor 225
6.6.2. Kalman-Bucy filter 231
6.6.3. Stability of Kalman-Bucy estimators 234
6.6.4. Robustness of Kalman-Bucy estimators 235
6.7. Design of the LQG command 239
6.8. Optimization problems connected to quadratic linear criteria 245
6.8.1. Optimal control by state feedback 245
6.8.2. Quadratic stabilization 248
6.8.3. Optimal command based on output feedback 249
Chapter 7. Optimization of Large-Scale Systems 251
7.1. Characteristics of complex optimization problems 251
7.2. Decomposition techniques 252
7.2.1. Problems with block-diagonal structure 253
7.2.2. Problems with separable criteria and constraints 267
7.3. Penalization techniques 283
7.3.1. External penalization technique 284
7.3.2. Internal penalization technique 285
7.3.3. Extended penalization technique 286
Chapter 8. Optimization and Information Systems 289
8.1. Introduction 289
8.2. Factors influencing the construction of IT systems 290
8.3. Approaches 292
8.4. Selection of computing tools 296
8.5. Difficulties in implementation and use 297
8.6. Evaluation 297
8.7. Conclusions 298
Bibliography 299
Index 307
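The table of contents above treats route optimization as a worked example of dynamic programming (sections 3.2 and 3.4.1). A minimal sketch of that backward recurrence of optimality, with an invented three-stage graph and arc costs purely for illustration:

```python
def shortest_route(stages, cost):
    """Backward dynamic programming over a staged graph.
    stages: list of node lists, source stage first, single terminal node last.
    cost: dict mapping (u, v) arcs between consecutive stages to their cost.
    Returns the minimal total cost from the source to the terminal node."""
    value = {stages[-1][0]: 0.0}  # cost-to-go at the terminal node is zero
    for k in range(len(stages) - 2, -1, -1):
        # Bellman recurrence: best arc out of u plus the cost-to-go at its head
        value.update({
            u: min(cost[(u, v)] + value[v]
                   for v in stages[k + 1] if (u, v) in cost)
            for u in stages[k]
        })
    return value[stages[0][0]]

# Invented example: A -> {B, C} -> D; the cheaper route is A -> C -> D with cost 5.
stages = [["A"], ["B", "C"], ["D"]]
cost = {("A", "B"): 1, ("A", "C"): 4, ("B", "D"): 5, ("C", "D"): 1}
best = shortest_route(stages, cost)  # 5
```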