Optimal Control

by Frank L. Lewis; Draguna L. Vrabie; Vassilis L. Syrmos
  • Edition: 3rd
  • Format: Hardcover
  • Copyright: 2012-02-01
  • Publisher: Wiley

Note: Supplemental materials are not guaranteed with Rental or Used book purchases.

Purchase Benefits

  • Free shipping on orders over $35. Bulk sales, POs, Marketplace items, eBooks, and apparel do not qualify for this offer.
  • We buy this book back: $3.15 in-store credit, or $3.00 by check/direct deposit.
  • List price: $165.00; rent for $99.00 (save up to $66.00).


Supplemental Materials

What is included with this book?

  • The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.
  • The Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.


This new, updated edition of Optimal Control reflects the major changes that have occurred in the field in recent years and presents the fundamentals of optimal control theory in a clear and direct way. It covers the major topics of the field, including measurement, principles of optimality, dynamic programming, variational methods, Kalman filtering, and other solution techniques. To give the reader a sense of the problems that can arise in a hands-on project, the authors have updated the material on optimal output feedback control, a technique used in the aerospace industry. Also new is a chapter on reinforcement learning for adaptive optimal control. Relations to classical control theory are emphasized throughout the text, and a root-locus approach to steady-state controller design is included. A chapter on optimal control of polynomial systems is designed to give the reader sufficient background for further study in adaptive control.
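As a flavor of the book's central topic, the discrete-time linear quadratic regulator can be computed by a backward Riccati recursion. The sketch below is illustrative only and is not taken from the text; the double-integrator system, weights, and horizon are assumed example values.

```python
import numpy as np

def lqr_gains(A, B, Q, R, N):
    """Finite-horizon discrete-time LQR for x_{k+1} = A x_k + B u_k
    with stage cost x'Qx + u'Ru. Returns time-varying gains K_k,
    where the optimal control is u_k = -K_k x_k."""
    P = Q.copy()                      # terminal cost P_N = Q (an assumed choice)
    gains = []
    for _ in range(N):
        # K = (R + B'PB)^{-1} B'PA, computed via a linear solve
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: P <- Q + A'P(A - BK)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]                # reorder so gains[k] applies at step k

# Toy double-integrator example (assumed parameters)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = np.array([[1.0]])
K = lqr_gains(A, B, Q, R, N=50)
```

For a long horizon the early gains approach the steady-state LQR gain, whose closed-loop matrix A - BK has all eigenvalues inside the unit circle.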

Author Biography

Frank L. Lewis is the Moncrief-O'Donnell Professor and Head of the Advanced Controls, Sensors, and MEMS Group in the Automation and Robotics Research Institute of the University of Texas at Arlington. Dr. Lewis is also a Fellow of the IEEE. Draguna L. Vrabie is Graduate Research Assistant in Electrical Engineering at the University of Texas at Arlington, specializing in approximate dynamic programming for continuous state and action spaces, optimal control, adaptive control, model predictive control, and general theory of nonlinear systems. Vassilis L. Syrmos is a Professor in the Department of Electrical Engineering and the Associate Vice Chancellor for Research and Graduate Education at the University of Hawaii at Manoa.

Table of Contents

Preface
Static Optimization
  Optimization without Constraints
  Optimization with Equality Constraints
  Numerical Solution Methods
  Problems
Optimal Control of Discrete-Time Systems
  Solution of the General Discrete-Time Optimization Problem
  Discrete-Time Linear Quadratic Regulator
  Digital Control of Continuous-Time Systems
  Steady-State Closed-Loop Control and Suboptimal Feedback
  Frequency-Domain Results
  Problems
Optimal Control of Continuous-Time Systems
  The Calculus of Variations
  Solution of the General Continuous-Time Optimization Problem
  Continuous-Time Linear Quadratic Regulator
  Steady-State Closed-Loop Control and Suboptimal Feedback
  Frequency-Domain Results
  Problems
The Tracking Problem and Other LQR Extensions
  The Tracking Problem
  Regulator with Function of Final State Fixed
  Second-Order Variations in the Performance Index
  The Discrete-Time Tracking Problem
  Discrete Regulator with Function of Final State Fixed
  Discrete Second-Order Variations in the Performance Index
  Problems
Final-Time-Free and Constrained Input Control
  Final-Time-Free Problems
  Constrained Input Problems
  Problems
Dynamic Programming
  Bellman's Principle of Optimality
  Discrete-Time Systems
  Continuous-Time Systems
  Problems
Optimal Control for Polynomial Systems
  Discrete Linear Quadratic Regulator
  Digital Control of Continuous-Time Systems
  Problems
Output Feedback and Structured Control
  Linear Quadratic Regulator with Output Feedback
  Tracking a Reference Input
  Tracking by Regulator Redesign
  Command-Generator Tracker
  Explicit Model-Following Design
  Output Feedback in Game Theory and Decentralized Control
  Problems
Robustness and Multivariable Frequency-Domain Techniques
  Introduction
  Multivariable Frequency-Domain Analysis
  Robust Output-Feedback Design
  Observers and the Kalman Filter
  LQG/Loop-Transfer Recovery
  H∞ Design
  Problems
Differential Games
  Optimal Control Derived Using Pontryagin's Minimum Principle and the Bellman Equation
  Two-Player Zero-Sum Games
  Application of Zero-Sum Games to H∞ Control
  Multiplayer Non-Zero-Sum Games
Reinforcement Learning and Optimal Adaptive Control
  Reinforcement Learning
  Markov Decision Processes
  Policy Evaluation and Policy Improvement
  Temporal Difference Learning and Optimal Adaptive Control
  Optimal Adaptive Control for Discrete-Time Systems
  Integral Reinforcement Learning for Optimal Adaptive Control of Continuous-Time Systems
  Synchronous Optimal Adaptive Control for Continuous-Time Systems
Review of Matrix Algebra
  Basic Definitions and Facts
  Partitioned Matrices
  Quadratic Forms and Definiteness
  Matrix Calculus
  The Generalized Eigenvalue Problem
References
Index
Table of Contents provided by Ingram. All Rights Reserved.
