One of the main problems in control theory is the stabilization problem, which consists of finding a feedback control law ensuring stability; when the linear approximation is considered, the natural problem is stabilization of a linear system by linear state feedback or by using a linear dynamic controller. This problem was intensively studied during the last decades, and many important results have been obtained.

The present monograph is based mainly on results obtained by the authors. It focuses on stabilization of systems with slow and fast motions, on stabilization procedures that use only poor information about the system (high-gain stabilization and adaptive stabilization), and also on discrete-time implementation of the stabilizing procedures. These topics are important in many applications of stabilization theory.

We hope that this monograph may illustrate the way in which mathematical theories do influence advanced technology. This book is not intended to be a textbook, nor a guide for control designers. In engineering practice, control design is a very complex task in which stability is only one of the requirements, and many aspects and facets of the problem have to be taken into consideration. Even if we restrict ourselves to stabilization, the book does not provide just recipes; it focuses more on the ideas lying behind the recipes. In short, this is not a book on control, but on some mathematics of control.
"This book is a very clear and comprehensive exposition of several aspects of linear control theory connected with the stabilizability problems. The first chapter introduces the basic notions of stability and the problem of stabilization by means of feedback. The interest in the linear case is motivated by the theorem of stability for the first approximation. Chapter 2 is devoted to finite-dimensional, time-continuous, time-invariant linear systems. First, the authors discuss some classical concepts, like controllability, stabilizability, observability, detectability and their relationship. Moreover, optimality and stabilization are related by means of an interesting version of the Kalman-Lurie-Yakubovich-Popov equation. Finally, the authors consider state estimators and stabilization with disturbance attenuation. A similar theory is developed in Chapter 3, for systems with two time scales (singularly perturbed systems) that is systems with fast and slow components. Chapter 4 deals with high-gain stabilization of minimum phase systems, while Chapter 5 is concerned with adaptive stabilization and identification. In the final chapter, the authors study stabilization of systems where the feedback is implemented by means of a sampling technique (digital control)." --Zentralblatt Math