Book contents
- Frontmatter
- Contents
- Credits and Acknowledgments
- Introduction
- 1 Distributed Constraint Satisfaction
- 2 Distributed Optimization
- 3 Introduction to Noncooperative Game Theory: Games in Normal Form
- 4 Computing Solution Concepts of Normal-Form Games
- 5 Games with Sequential Actions: Reasoning and Computing with the Extensive Form
- 6 Richer Representations: Beyond the Normal and Extensive Forms
- 7 Learning and Teaching
- 8 Communication
- 9 Aggregating Preferences: Social Choice
- 10 Protocols for Strategic Agents: Mechanism Design
- 11 Protocols for Multiagent Resource Allocation: Auctions
- 12 Teams of Selfish Agents: An Introduction to Coalitional Game Theory
- 13 Logics of Knowledge and Belief
- 14 Beyond Belief: Probability, Dynamics, and Intention
- Appendices: Technical Background
- A Probability Theory
- B Linear and Integer Programming
- C Markov Decision Problems (MDPs)
- D Classical Logic
- Bibliography
- Index
C - Markov Decision Problems (MDPs)
from Appendices: Technical Background
Published online by Cambridge University Press: 05 June 2012
Summary
We briefly review the main ingredients of Markov Decision Problems or MDPs, which, as we discuss in Chapter 6, can be viewed as single-agent stochastic games. The literature on MDPs is rich, and the reader is referred to the many textbooks on the subject for further reading.
The model
An MDP is a model for decision making in an uncertain, dynamic world. The (single) agent starts out in some state, takes an action, and receives an immediate reward. The state then transitions probabilistically to some other state, and the process repeats. Formally speaking, an MDP is a tuple (S, A, p, R). S is a set of states and A is a set of actions. The function p : S × A × S ↦ ℝ specifies the transition probabilities among states: p(s, a, s′) is the probability of ending up in state s′ when taking action a in state s (so for each fixed s and a, the values p(s, a, ·) sum to 1 over the successor states). Finally, the function R : S × A ↦ ℝ returns the reward for each state–action pair.
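The tuple (S, A, p, R) above can be sketched directly as data. The following is a minimal illustration using a toy two-state example; the particular states, transition probabilities, and rewards are invented for the sketch and do not come from the text.

```python
import random

# A sketch of the MDP tuple (S, A, p, R) on a hypothetical two-state example.
S = ["s0", "s1"]          # set of states
A = ["stay", "move"]      # set of actions

# p[(s, a)] maps each successor state s' to p(s, a, s');
# for every state-action pair, the probabilities sum to 1.
p = {
    ("s0", "stay"): {"s0": 0.9, "s1": 0.1},
    ("s0", "move"): {"s0": 0.2, "s1": 0.8},
    ("s1", "stay"): {"s0": 0.1, "s1": 0.9},
    ("s1", "move"): {"s0": 0.7, "s1": 0.3},
}

# R(s, a): the immediate reward for taking action a in state s.
R = {
    ("s0", "stay"): 0.0, ("s0", "move"): 1.0,
    ("s1", "stay"): 2.0, ("s1", "move"): 0.5,
}

def step(s, a, rng=random):
    """Simulate one step of the process: collect the reward R(s, a),
    then sample the next state from the distribution p(s, a, .)."""
    dist = p[(s, a)]
    s_next = rng.choices(list(dist), weights=list(dist.values()))[0]
    return R[(s, a)], s_next
```

Repeatedly calling `step` on its own output reproduces the act–reward–transition loop described in the text.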
- Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, pp. 455–456. Publisher: Cambridge University Press. Print publication year: 2008