Book contents
- Frontmatter
- Contents
- Preface
- 1 Static optimization
- 2 Ordinary differential equations
- 3 Introduction to dynamic optimization
- 4 The maximum principle
- 5 The calculus of variations and dynamic programming
- 6 The general constrained control problem
- 7 Endpoint constraints and transversality conditions
- 8 Discontinuities in the optimal controls
- 9 Infinite-horizon problems
- 10 Three special topics
- Bibliography
- Index
4 - The maximum principle
Published online by Cambridge University Press: 05 June 2012
Summary
In this chapter we present a first account of optimal control theory. The maximum principle is the central result of the theory. (It was originally developed by Pontryagin and his associates; see Pontryagin et al., 1962.) To help the reader become thoroughly acquainted with it, we proceed with the analysis of a simple case, without paying undue attention to some technical regularity conditions. (These and other matters will be dealt with in Chapter 6.)
A simple control problem
Consider a dynamic system – for instance, a moving spaceship or an economy. Some variables can be identified that describe the state of the system: they are called state variables – for instance, the distance of the spaceship from earth or the stock of goods present in the economy. The rate of change over time in the value of a state variable may depend on the value of that variable, on time itself, or on some other variables, which can be controlled at any time by the operator of the system. These other variables are called control variables – for instance, the pitch of the motor or the flow of goods consumed at any instant. The equations describing the rates of change in the state variables are usually differential equations, as discussed in Chapter 2. Once values are chosen for the control variables at each date, the rates of change of the state variables are determined at every time; given the initial values of the state variables, so are all their future values.
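The determinism described above can be illustrated numerically. The sketch below (not from the text) uses a hypothetical one-state system with dynamics x′(t) = −a·x(t) + u(t), where x is the state and u the control, and integrates it by the forward Euler method: once a control rule u(t) and the initial state are fixed, the whole trajectory follows.

```python
# Minimal sketch, assuming hypothetical dynamics x'(t) = -a*x(t) + u(t),
# with state x, control u, and parameter a. Not a definitive implementation.

def simulate(x0, u, a=0.5, t_end=1.0, dt=0.01):
    """Forward-Euler integration of x' = -a*x + u(t) from x(0) = x0."""
    x, t = x0, 0.0
    path = [(t, x)]
    while t < t_end - 1e-12:
        # The rate of change depends on the state, time, and the control.
        x += dt * (-a * x + u(t))
        t += dt
        path.append((t, x))
    return path

# Once the control rule u(t) is chosen and x(0) is given,
# all future values of the state are determined:
trajectory = simulate(x0=1.0, u=lambda t: 0.0)
print(trajectory[-1])  # approximate state at t = 1 under zero control
```

With the zero control, the state simply decays toward zero at rate a; a different choice of u(t) would produce a different, but equally fully determined, trajectory.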
- Type: Chapter
- Information: Optimal Control Theory and Static Optimization in Economics, pp. 127–168. Publisher: Cambridge University Press. Print publication year: 1992