A NEW EDITION OF THE CLASSIC TEXT ON OPTIMAL CONTROL THEORY

As a superb introductory text and an indispensable reference, this new edition of Optimal Control will serve the needs of both the professional engineer and the advanced student in mechanical, electrical, and aerospace engineering. Its coverage encompasses all the fundamental topics as well as the major changes that have occurred in recent years. An abundance of computer simulations using MATLAB and relevant Toolboxes is included to give the reader the actual experience of applying the theory to real-world situations.

Major topics covered include:
Static Optimization
Optimal Control of Discrete-Time Systems
Optimal Control of Continuous-Time Systems
The Tracking Problem and Other LQR Extensions
Final-Time-Free and Constrained Input Control
Dynamic Programming
Optimal Control for Polynomial Systems
Output Feedback and Structured Control
Robustness and Multivariable Frequency-Domain Techniques
Differential Games
Reinforcement Learning and Optimal Adaptive Control
This new, updated edition reflects major changes that have occurred in the field in recent years and presents, in a clear and direct way, the fundamentals of optimal control theory. It covers the major topics involving measurement, principles of optimality, dynamic programming, variational methods, Kalman filtering, and other solution techniques.
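To give a flavor of the kind of computation the book develops (for example, the discrete-time linear quadratic regulator treated in Chapter 2), here is a minimal sketch of the backward Riccati recursion. It is written in Python rather than the book's MATLAB, and the plant matrices, weights, and horizon are illustrative assumptions, not an example taken from the text.

```python
# Minimal sketch (not from the book): finite-horizon discrete-time LQR
# solved by the backward Riccati recursion. System, weights, and horizon
# below are made-up illustrative values.
import numpy as np

def dlqr_finite_horizon(A, B, Q, R, QN, N):
    """Return time-varying feedback gains K_k for u_k = -K_k x_k, k = 0..N-1."""
    P = QN.copy()
    gains = []
    for _ in range(N):
        # K_k = (R + B'P_{k+1}B)^{-1} B'P_{k+1}A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati recursion: P_k = Q + A'P_{k+1}A - A'P_{k+1}B K_k
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return list(reversed(gains))  # reorder so gains[0] applies at k = 0

# Hypothetical double-integrator-like plant with illustrative weights.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
K = dlqr_finite_horizon(A, B, Q, R, QN=10 * np.eye(2), N=50)
print(K[0])  # gain at the start of the horizon, near its steady-state value
```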
PREFACE xi
1 STATIC OPTIMIZATION 1
1.1 Optimization without Constraints / 1
1.2 Optimization with Equality Constraints / 4
1.3 Numerical Solution Methods / 15
Problems / 15
2 OPTIMAL CONTROL OF DISCRETE-TIME SYSTEMS 19
2.1 Solution of the General Discrete-Time Optimization Problem / 19
2.2 Discrete-Time Linear Quadratic Regulator / 32
2.3 Digital Control of Continuous-Time Systems / 53
2.4 Steady-State Closed-Loop Control and Suboptimal Feedback / 65
2.5 Frequency-Domain Results / 96
Problems / 102
3 OPTIMAL CONTROL OF CONTINUOUS-TIME SYSTEMS 110
3.1 The Calculus of Variations / 110
3.2 Solution of the General Continuous-Time Optimization Problem / 112
3.3 Continuous-Time Linear Quadratic Regulator / 135
3.4 Steady-State Closed-Loop Control and Suboptimal Feedback / 154
3.5 Frequency-Domain Results / 164
Problems / 167
4 THE TRACKING PROBLEM AND OTHER LQR EXTENSIONS 177
4.1 The Tracking Problem / 177
4.2 Regulator with Function of Final State Fixed / 183
4.3 Second-Order Variations in the Performance Index / 185
4.4 The Discrete-Time Tracking Problem / 190
4.5 Discrete Regulator with Function of Final State Fixed / 199
4.6 Discrete Second-Order Variations in the Performance Index / 206
Problems / 211
5 FINAL-TIME-FREE AND CONSTRAINED INPUT CONTROL 213
5.1 Final-Time-Free Problems / 213
5.2 Constrained Input Problems / 232
Problems / 257
6 DYNAMIC PROGRAMMING 260
6.1 Bellman's Principle of Optimality / 260
6.2 Discrete-Time Systems / 263
6.3 Continuous-Time Systems / 271
Problems / 283
7 OPTIMAL CONTROL FOR POLYNOMIAL SYSTEMS 287
7.1 Discrete Linear Quadratic Regulator / 287
7.2 Digital Control of Continuous-Time Systems / 292
Problems / 295
8 OUTPUT FEEDBACK AND STRUCTURED CONTROL 297
8.1 Linear Quadratic Regulator with Output Feedback / 297
8.2 Tracking a Reference Input / 313
8.3 Tracking by Regulator Redesign / 327
8.4 Command-Generator Tracker / 331
8.5 Explicit Model-Following Design / 338
8.6 Output Feedback in Game Theory and Decentralized Control / 343
Problems / 351
9 ROBUSTNESS AND MULTIVARIABLE FREQUENCY-DOMAIN TECHNIQUES 355
9.1 Introduction / 355
9.2 Multivariable Frequency-Domain Analysis / 357
9.3 Robust Output-Feedback Design / 380
9.4 Observers and the Kalman Filter / 383
9.5 LQG/Loop-Transfer Recovery / 408
9.6 H∞ Design / 430
Problems / 435
10 DIFFERENTIAL GAMES 438
10.1 Optimal Control Derived Using Pontryagin's Minimum Principle and the Bellman Equation / 439
10.2 Two-Player Zero-Sum Games / 444
10.3 Application of Zero-Sum Games to H∞ Control / 450
10.4 Multiplayer Non-Zero-Sum Games / 453
11 REINFORCEMENT LEARNING AND OPTIMAL ADAPTIVE CONTROL 461
11.1 Reinforcement Learning / 462
11.2 Markov Decision Processes / 464
11.3 Policy Evaluation and Policy Improvement / 474
11.4 Temporal Difference Learning and Optimal Adaptive Control / 489
11.5 Optimal Adaptive Control for Discrete-Time Systems / 490
11.6 Integral Reinforcement Learning for Optimal Adaptive Control of Continuous-Time Systems / 503
11.7 Synchronous Optimal Adaptive Control for Continuous-Time Systems / 513
APPENDIX A REVIEW OF MATRIX ALGEBRA 518
A.1 Basic Definitions and Facts / 518
A.2 Partitioned Matrices / 519
A.3 Quadratic Forms and Definiteness / 521
A.4 Matrix Calculus / 523
A.5 The Generalized Eigenvalue Problem / 525
REFERENCES 527
INDEX 535
Product details
ISBN: 9780470633496
Published: 2012-02-20
Edition: 3rd edition
Publisher: John Wiley & Sons Inc
Weight: 885 g
Height: 236 mm
Width: 163 mm
Depth: 36 mm
Audience level: P, 06
Language: English
Format: Hardcover
Number of pages: 552
About the contributors
FRANK L. LEWIS is the Moncrief-O'Donnell Professor and Head of the Advanced Controls, Sensors, and MEMS Group in the Automation and Robotics Research Institute of the University of Texas at Arlington. Dr. Lewis is also a Fellow of the IEEE.
DRAGUNA L. VRABIE is a Graduate Research Assistant in Electrical Engineering at the University of Texas at Arlington, specializing in approximate dynamic programming for continuous state and action spaces, optimal control, adaptive control, model predictive control, and the general theory of nonlinear systems.
VASSILIS L. SYRMOS is a Professor in the Department of Electrical Engineering and the Associate Vice Chancellor for Research and Graduate Education at the University of Hawaii at Manoa.