
Optimal Control (ISBN 9780470633496)

Optimal Control

by Frank L. Lewis; Draguna L. Vrabie; Vassilis L. Syrmos
Pub. Date: 2/1/2012
List Price: $165.00

Rent Textbook

Buy New Textbook
Usually Ships in 3-4 Business Days

Used Textbook
We're Sorry, Sold Out

More New and Used from Private Sellers
Starting at $85.84

Questions About This Book?

Why should I rent this book?

Renting is easy, fast, and cheap! Renting can save you hundreds of dollars each semester compared with buying new or used books. At the end of the semester, simply ship the book back to us with a free UPS shipping label. No need to worry about selling it back.

How do rental returns work?

Returning books is simple. As your rental due date approaches, we will email you several courtesy reminders. When you are ready to return, you can print a free UPS shipping label from our website at any time. Then, just hand the book to your UPS driver or drop it off at any staffed UPS location. You can even use the same box we shipped it in!

What version or edition is this?

This is the 3rd edition with a publication date of 2/1/2012.

What is included with this book?

  • The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any CDs, lab manuals, study guides, etc.
  • The Rental copy of this book is not guaranteed to include any supplemental materials. You may receive a brand new copy, but typically only the book itself is included.
  • The eBook copy of this book is not guaranteed to include any supplemental materials. Typically only the book itself is included.


This new, updated edition of Optimal Control reflects major changes that have occurred in the field in recent years and presents, in a clear and direct way, the fundamentals of optimal control theory. It covers the major topics involving measurement, principles of optimality, dynamic programming, variational methods, Kalman filtering, and other solution techniques. To give the reader a sense of the problems that can arise in a hands-on project, the authors have updated material on optimal output feedback control, a technique used in the aerospace industry. Also included is a new chapter on Reinforcement Learning for adaptive optimal control. Relations to classical control theory are emphasized throughout the text, and a root-locus approach to steady-state controller design is included. A chapter on optimal control of polynomial systems is designed to give the reader sufficient background for further study in the field of adaptive control.
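The discrete-time linear quadratic regulator and steady-state feedback design mentioned above are central topics of the book. As a minimal illustration (not an example from the book), the following sketch computes the steady-state LQR gain for a hypothetical discretized double-integrator plant by iterating the Riccati difference equation to convergence; the system matrices and weights are assumptions chosen for demonstration:

```python
import numpy as np

# Hypothetical discretized double integrator with sampling period dt.
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],
              [dt]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting

# Iterate the Riccati difference equation until it converges to the
# steady-state solution P of the discrete algebraic Riccati equation.
P = Q.copy()
for _ in range(1000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P_next = A.T @ P @ (A - B @ K) + Q
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

# Steady-state feedback gain for the control law u_k = -K x_k.
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("LQR gain K =", K)
```

Because the pair (A, B) is controllable and Q is positive definite, the iteration converges and the resulting closed-loop matrix A - BK has all eigenvalues inside the unit circle.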

Author Biography

Frank L. Lewis is the Moncrief-O'Donnell Professor and Head of the Advanced Controls, Sensors, and MEMS Group in the Automation and Robotics Research Institute of the University of Texas at Arlington. Dr. Lewis is also a Fellow of the IEEE. Draguna L. Vrabie is Graduate Research Assistant in Electrical Engineering at the University of Texas at Arlington, specializing in approximate dynamic programming for continuous state and action spaces, optimal control, adaptive control, model predictive control, and general theory of nonlinear systems. Vassilis L. Syrmos is a Professor in the Department of Electrical Engineering and the Associate Vice Chancellor for Research and Graduate Education at the University of Hawaii at Manoa.

Table of Contents

Preface, p. xi
Static Optimization, p. 1
Optimization without Constraints, p. 1
Optimization with Equality Constraints, p. 4
Numerical Solution Methods, p. 15
Problems, p. 15
Optimal Control of Discrete-Time Systems, p. 19
Solution of the General Discrete-Time Optimization Problem, p. 19
Discrete-Time Linear Quadratic Regulator, p. 32
Digital Control of Continuous-Time Systems, p. 53
Steady-State Closed-Loop Control and Suboptimal Feedback, p. 65
Frequency-Domain Results, p. 96
Problems, p. 102
Optimal Control of Continuous-Time Systems, p. 110
The Calculus of Variations, p. 110
Solution of the General Continuous-Time Optimization Problem, p. 112
Continuous-Time Linear Quadratic Regulator, p. 135
Steady-State Closed-Loop Control and Suboptimal Feedback, p. 154
Frequency-Domain Results, p. 164
Problems, p. 167
The Tracking Problem and Other LQR Extensions, p. 177
The Tracking Problem, p. 177
Regulator with Function of Final State Fixed, p. 183
Second-Order Variations in the Performance Index, p. 185
The Discrete-Time Tracking Problem, p. 190
Discrete Regulator with Function of Final State Fixed, p. 199
Discrete Second-Order Variations in the Performance Index, p. 206
Problems, p. 211
Final-Time-Free and Constrained Input Control, p. 213
Final-Time-Free Problems, p. 213
Constrained Input Problems, p. 232
Problems, p. 257
Dynamic Programming, p. 260
Bellman's Principle of Optimality, p. 260
Discrete-Time Systems, p. 263
Continuous-Time Systems, p. 271
Problems, p. 283
Optimal Control for Polynomial Systems, p. 287
Discrete Linear Quadratic Regulator, p. 287
Digital Control of Continuous-Time Systems, p. 292
Problems, p. 295
Output Feedback and Structured Control, p. 297
Linear Quadratic Regulator with Output Feedback, p. 297
Tracking a Reference Input, p. 313
Tracking by Regulator Redesign, p. 327
Command-Generator Tracker, p. 331
Explicit Model-Following Design, p. 338
Output Feedback in Game Theory and Decentralized Control, p. 343
Problems, p. 351
Robustness and Multivariable Frequency-Domain Techniques, p. 355
Introduction, p. 355
Multivariable Frequency-Domain Analysis, p. 357
Robust Output-Feedback Design, p. 380
Observers and the Kalman Filter, p. 383
LQG/Loop-Transfer Recovery, p. 408
H∞ Design, p. 430
Problems, p. 435
Differential Games, p. 438
Optimal Control Derived Using Pontryagin's Minimum Principle and the Bellman Equation, p. 439
Two-Player Zero-Sum Games, p. 444
Application of Zero-Sum Games to H∞ Control, p. 450
Multiplayer Non-Zero-Sum Games, p. 453
Reinforcement Learning and Optimal Adaptive Control, p. 461
Reinforcement Learning, p. 462
Markov Decision Processes, p. 464
Policy Evaluation and Policy Improvement, p. 474
Temporal Difference Learning and Optimal Adaptive Control, p. 489
Optimal Adaptive Control for Discrete-Time Systems, p. 490
Integral Reinforcement Learning for Optimal Adaptive Control of Continuous-Time Systems, p. 503
Synchronous Optimal Adaptive Control for Continuous-Time Systems, p. 513
Review of Matrix Algebra, p. 518
Basic Definitions and Facts, p. 518
Partitioned Matrices, p. 519
Quadratic Forms and Definiteness, p. 521
Matrix Calculus, p. 523
The Generalized Eigenvalue Problem, p. 525
References, p. 527
Index, p. 535
Table of Contents provided by Ingram. All Rights Reserved.
