- Hardcover
Infinite dimensional systems can be used to describe many phenomena in the real world. As is well known, heat conduction, properties of elastic-plastic materials, fluid dynamics, diffusion-reaction processes, etc., all lie within this area. The object that we are studying (temperature, displacement, concentration, velocity, etc.) is usually referred to as the state. We are interested in the case where the state satisfies proper differential equations that are derived from certain physical laws, such as Newton's law, Fourier's law, etc. The space in which the state exists is called the state space, and the equation that the state satisfies is called the state equation. By an infinite dimensional system we mean one whose corresponding state space is infinite dimensional. In particular, we are interested in the case where the state equation is one of the following types: partial differential equation, functional differential equation, integro-differential equation, or abstract evolution equation. The case in which the state equation is a stochastic differential equation is also an infinite dimensional problem, but we will not discuss such a case in this book.
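As a standard illustration of these notions (a sketch, not an excerpt from the book), the controlled one-dimensional heat equation can be written as an abstract evolution equation on an infinite dimensional state space:

```latex
\[
  \frac{\partial y}{\partial t}(t,x) = \frac{\partial^2 y}{\partial x^2}(t,x) + u(t,x),
  \qquad y(t,0) = y(t,1) = 0,
\]
which, on the state space $H = L^2(0,1)$, takes the abstract form
\[
  \dot y(t) = A\,y(t) + u(t), \qquad
  A\varphi = \varphi'', \quad
  D(A) = H^2(0,1) \cap H^1_0(0,1).
\]
```

Here the state $y(t,\cdot)$ is the temperature profile, the state space $H$ is infinite dimensional, and $A$ generates a $C_0$ semigroup on $H$ — the setting treated in the book's chapters on evolution equations.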
Product details
- Series: Systems & Control: Foundations & Applications
- Publisher: Birkhäuser / Birkhäuser Boston / Springer, Basel
- Publisher's item no.: 978-0-8176-3722-4
- Edition: 1994
- Number of pages: 468
- Publication date: 22 December 1994
- Language: English
- Dimensions: 241mm x 160mm x 31mm
- Weight: 830g
- ISBN-13: 9780817637224
- ISBN-10: 0817637222
- Item no.: 21391295
1. Control Problems in Infinite Dimensions.- 1. Diffusion Problems.- 2. Vibration Problems.- 3. Population Dynamics.- 4. Fluid Dynamics.- 5. Free Boundary Problems.- Remarks.- 2. Mathematical Preliminaries.- 1. Elements in Functional Analysis.- 1.1. Spaces.- 1.2. Linear operators.- 1.3. Linear functionals and dual spaces.- 1.4. Adjoint operators.- 1.5. Spectral theory.- 1.6. Compact operators.- 2. Some Geometric Aspects of Banach Spaces.- 2.1. Convex sets.- 2.2. Convexity of Banach spaces.- 3. Banach Space Valued Functions.- 3.1. Measurability and integrability.- 3.2. Continuity and differentiability.- 4. Theory of C0 Semigroups.- 4.1. Unbounded operators.- 4.2. C0 semigroups.- 4.3. Special types of C0 semigroups.- 4.4. Examples.- 5. Evolution Equations.- 5.1. Solutions.- 5.2. Semilinear equations.- 5.3. Variation of constants formula.- 6. Elliptic Partial Differential Equations.- 6.1. Sobolev spaces.- 6.2. Linear elliptic equations.- 6.3. Semilinear elliptic equations.- Remarks.- 3. Existence Theory of Optimal Controls.- 1. Souslin Space.- 1.1. Polish space.- 1.2. Souslin space.- 1.3. Capacity and capacitability.- 2. Multifunctions and Selection Theorems.- 2.1. Continuity.- 2.2. Measurability.- 2.3. Measurable selection theorems.- 3. Evolution Systems with Compact Semigroups.- 4. Existence of Feasible Pairs and Optimal Pairs.- 4.1. Cesari property.- 4.2. Existence theorems.- 5. Second Order Evolution Systems.- 5.1. Formulation of the problem.- 5.2. Existence of optimal controls.- 6. Elliptic Partial Differential Equations and Variational Inequalities.- Remarks.- 4. Necessary Conditions for Optimal Controls - Abstract Evolution Equations.- 1. Formulation of the Problem.- 2. Ekeland Variational Principle.- 3. Other Preliminary Results.- 3.1. Finite codimensionality.- 3.2. Preliminaries for spike perturbation.- 3.3. The distance function.- 4. Proof of the Maximum Principle.- 5. Applications.- Remarks.- 5. 
Necessary Conditions for Optimal Controls - Elliptic Partial Differential Equations.- 1. Semilinear Elliptic Equations.- 1.1. Optimal control problem and the maximum principle.- 1.2. The state constraints.- 2. Variation along Feasible Pairs.- 3. Proof of the Maximum Principle.- 4. Variational Inequalities.- 4.1. Stability of the optimal cost.- 4.2. Approximate control problems.- 4.3. Maximum principle and its proof.- 5. Quasilinear Equations.- 5.1. The state equation and the optimal control problem.- 5.2. The maximum principle.- 6. Minimax Control Problem.- 6.1. Statement of the problem.- 6.2. Regularization of the cost functional.- 6.3. Necessary conditions for optimal controls.- 7. Boundary Control Problems.- 7.1. Formulation of the problem.- 7.2. Strong stability and the qualified maximum principle.- 7.3. Neumann problem with measure data.- 7.4. Exact penalization and a proof of the maximum principle.- Remarks.- 6. Dynamic Programming Method for Evolution Systems.- 1. Optimality Principle and Hamilton-Jacobi-Bellman Equations.- 2. Properties of the Value Functions.- 2.1. Continuity.- 2.2. B-continuity.- 2.3. Semi-concavity.- 3. Viscosity Solutions.- 4. Uniqueness of Viscosity Solutions.- 4.1. A perturbed optimization lemma.- 4.2. The Hilbert space X?.- 4.3. A uniqueness theorem.- 5. Relation to Maximum Principle and Optimal Synthesis.- 6. Infinite Horizon Problems.- Remarks.- 7. Controllability and Time Optimal Control.- 1. Definitions of Controllability.- 2. Controllability for Linear Systems.- 2.1. Approximate controllability.- 2.2. Exact controllability.- 3. Approximate Controllability for Semilinear Systems.- 4. Time Optimal Control - Semilinear Systems.- 4.1. Necessary conditions for time optimal pairs.- 4.2. The minimum time function.- 5. Time Optimal Control - Linear Systems.- 5.1. Convexity of the reachable set.- 5.2. Encounter of moving sets.- 5.3. Time optimal control.- Remarks.- 8. Optimal Switching and Impulse Controls.- 1. 
Switching and Impulse Controls.- 2. Preliminary Results.- 3. Properties of the Value Function.- 4. Optimality Principle and the HJB Equation.- 5. Construction of an Optimal Control.- 6. Approximation of the Control Problem.- 7. Viscosity Solutions.- 8. Problem in Finite Horizon.- Remarks.- 9. Linear Quadratic Optimal Control Problems.- 1. Formulation of the Problem.- 1.1. Examples of unbounded control problems.- 1.2. The LQ problem.- 2. Well-posedness and Solvability.- 3. State Feedback Control.- 3.1. Two-point boundary value problem.- 3.2. The Problem (LQ)t.- 3.3. A Fredholm integral equation.- 3.4. State feedback representation of optimal controls.- 4. Riccati Integral Equation.- 5. Problem in Infinite Horizon.- 5.1. Reduction of the problem.- 5.2. Well-posedness and solvability.- 5.3. Algebraic Riccati equation.- 5.4. The positive real lemma.- 5.5. Feedback stabilization.- 5.6. Fredholm integral equation and Riccati integral equation.- Remarks.- References.