Markov decision process

From Wikipedia, the free encyclopedia

Markov decision processes (MDPs) provide a mathematical framework for modeling decision-making in situations where outcomes are partly random and partly under the control of the decision maker. MDPs are useful for studying a wide range of optimization problems solved via dynamic programming and reinforcement learning. MDPs were known at least as early as the 1950s (cf. Bellman 1957), and much research in the area was spawned by Ronald A. Howard's 1960 book, Dynamic Programming and Markov Processes. Today they are used in a variety of areas, including robotics, automated control, economics and manufacturing.

More precisely, a Markov decision process is a discrete-time stochastic control process characterized by a set of states; in each state there are several actions from which the decision maker must choose. For a state s and an action a, the transition function P_a(s,s') gives the probability of moving to each possible next state s'. The decision maker earns a reward for each state visited. The states of an MDP possess the Markov property: if the current state of the MDP at time t is known, transitions to a new state at time t + 1 are independent of all previous states.


Definition

A Markov decision process is a tuple (S,A,P_\cdot(\cdot,\cdot),R(\cdot)), where

  • S is the state space,
  • A is the action space,
  • P_a(s,s') = \Pr(s_{t+1}=s' | s_t = s,\, a_t=a) is the probability that action a in state s at time t will lead to state s' at time t + 1,
  • R(s) is the immediate reward received in state s.

The goal is to maximize some cumulative function of the rewards, typically the expected discounted sum under a discount factor γ (usually just under 1). This objective is written as:

  • \sum^{\infty}_{t=0}\gamma^t R(s_t)

where γ is the discount factor and satisfies 0 < \gamma \le 1. This sum is to be maximized (in expectation, since the state sequence is random).
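
As a concrete illustration (not from the article), the tuple (S, A, P, R) for a small hypothetical two-state MDP can be written down directly in Python, together with a truncated estimate of the discounted sum above for one sampled trajectory under a fixed policy; all states, actions, probabilities and rewards below are invented for the example.

  import random

  S = ["s0", "s1"]                    # state space
  A = ["stay", "move"]                # action space
  P = {                               # P[a][s][s2] = Pr(s_{t+1}=s2 | s_t=s, a_t=a)
      "stay": {"s0": {"s0": 0.9, "s1": 0.1}, "s1": {"s0": 0.1, "s1": 0.9}},
      "move": {"s0": {"s0": 0.2, "s1": 0.8}, "s1": {"s0": 0.8, "s1": 0.2}},
  }
  R = {"s0": 0.0, "s1": 1.0}          # immediate reward received in each state
  gamma = 0.9                         # discount factor, 0 < gamma <= 1

  def discounted_return(policy, s, horizon=100):
      # sample one trajectory under `policy` and sum gamma^t * R(s_t),
      # truncating the infinite sum after `horizon` steps
      total = 0.0
      for t in range(horizon):
          total += gamma ** t * R[s]
          probs = P[policy[s]][s]
          s = random.choices(list(probs), weights=list(probs.values()))[0]
      return total

  print(discounted_return({"s0": "move", "s1": "stay"}, "s0"))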

Markov decision processes are an extension of Markov chains; the difference is the addition of actions (allowing choice) and rewards (giving motivation). If there were only one action, or if the action to take were somehow fixed for each state, a Markov decision process would reduce to a Markov chain.

Solution

The solution to a Markov decision process can be expressed as a policy π, which gives the action to take in a given state, regardless of prior history. Note that once a Markov decision process is combined with a policy in this way, the action for each state is fixed and the resulting combination behaves like a Markov chain.

The standard family of algorithms for calculating the policy requires storage for two arrays indexed by state: the value V, which contains real values, and the policy π, which contains actions. At the end of the algorithm, π will contain the solution and V(s_0) will contain the discounted sum of the rewards to be earned (on average) by following that solution from the starting state s_0.

The algorithm then has the following two kinds of steps, which are repeated in some order for all the states until no further changes take place.

\pi(s) := \arg \max_a \sum_{s'} P_a(s,s') V(s')
V(s) := R(s) + \gamma \sum_{s'} P_{\pi(s)}(s,s') V(s')

Their order depends on the variant of the algorithm; one can also do them for all states at once or state by state, and more often for some states than others. As long as no state is permanently excluded from either of the steps, the algorithm will eventually arrive at the correct solution.
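
A minimal Python sketch of the two steps, under the assumption that the transition probabilities and rewards are stored in nested dictionaries P[a][s][s2] and R[s] (as in the example above) and that V and pi are dictionaries keyed by state; these names and the representation are choices made for the sketch, not part of the article.

  def step_one(V, pi, S, A, P):
      # pi(s) := argmax_a sum_{s'} P_a(s,s') V(s')
      for s in S:
          pi[s] = max(A, key=lambda a: sum(P[a][s][s2] * V[s2] for s2 in P[a][s]))

  def step_two(V, pi, S, P, R, gamma):
      # V(s) := R(s) + gamma * sum_{s'} P_{pi(s)}(s,s') V(s')
      for s in S:
          V[s] = R[s] + gamma * sum(P[pi[s]][s][s2] * V[s2] for s2 in P[pi[s]][s])

The variants below differ only in how often, and in what order, these two functions are applied.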

Notable variants

Value iteration

In value iteration (Bellman 1957), the π array is not used; instead, the value of π(s) is calculated whenever it is needed.

Substituting the calculation of π(s) into the calculation of V(s) gives the combined step:

V(s) := R(s) + \gamma \max_a \sum_{s'} P_a(s,s') V(s')
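
A sketch of value iteration using the combined step, under the same assumed dictionary representation; tol is an illustrative convergence threshold, not a value given in the article.

  def value_iteration(S, A, P, R, gamma, tol=1e-6):
      V = {s: 0.0 for s in S}
      while True:
          delta = 0.0
          for s in S:
              # combined step: V(s) := R(s) + gamma * max_a sum_{s'} P_a(s,s') V(s')
              v_new = R[s] + gamma * max(
                  sum(P[a][s][s2] * V[s2] for s2 in P[a][s]) for a in A)
              delta = max(delta, abs(v_new - V[s]))
              V[s] = v_new
          if delta < tol:
              return V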

Policy iteration

In policy iteration (Howard 1960), step one is performed once, and then step two is repeated until it converges. Then step one is again performed once and so on.

Instead of repeating step two to convergence, it may be formulated and solved as a set of linear equations.

This variant has the advantage that there is a definite stopping condition: when the array π does not change in the course of applying step one to all states, the algorithm is completed.
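
A sketch of policy iteration in which step two is solved exactly as the linear system (I - γP_π)V = R, assuming (purely for the sketch) that states are indexed 0..n-1, P is a NumPy array of shape (num_actions, n, n) and R has shape (n,).

  import numpy as np

  def policy_iteration(P, R, gamma):
      n = R.shape[0]
      pi = np.zeros(n, dtype=int)                 # arbitrary initial policy
      while True:
          # step two, solved exactly: (I - gamma * P_pi) V = R
          P_pi = P[pi, np.arange(n)]              # row s is P_{pi(s)}(s, .)
          V = np.linalg.solve(np.eye(n) - gamma * P_pi, R)
          # step one: greedy improvement (R(s) does not depend on a, so it drops out)
          new_pi = np.argmax(P @ V, axis=0)
          if np.array_equal(new_pi, pi):          # stopping condition: pi unchanged
              return pi, V
          pi = new_pi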

Modified policy iteration

In modified policy iteration (Puterman and Shin 1978), step one is performed once, and then step two is repeated several times. Then step one is again performed once and so on.
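
A sketch of modified policy iteration, reusing the hypothetical step_one and step_two helpers from the earlier sketch; k (how many times step two is repeated) and sweeps (how many outer iterations to run) are illustrative parameters, not values given by Puterman and Shin.

  def modified_policy_iteration(S, A, P, R, gamma, k=5, sweeps=100):
      V = {s: 0.0 for s in S}
      pi = {s: A[0] for s in S}
      for _ in range(sweeps):
          step_one(V, pi, S, A, P)       # step one, performed once
          for _ in range(k):             # step two, repeated k times rather than to convergence
              step_two(V, pi, S, P, R, gamma)
      return pi, V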

Prioritized sweeping

In this variant, the steps are preferentially applied to states which are in some way important, whether based on the algorithm (there were large changes in V or π around those states recently) or based on use (those states are near the starting state, or otherwise of interest to the person or program using the algorithm).
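
A sketch of the first criterion (large recent changes in V), again over the assumed dictionary representation: states are kept in a priority queue ordered by how much their value last changed, and a large change re-queues the state's predecessors. Duplicate queue entries are tolerated for simplicity; n_updates and theta are illustrative parameters.

  import heapq

  def prioritized_sweeping(S, A, P, R, gamma, n_updates=1000, theta=1e-6):
      V = {s: 0.0 for s in S}
      # predecessors: states s0 that can reach s in one step under some action
      pred = {s: [s0 for s0 in S if any(P[a][s0].get(s, 0.0) > 0 for a in A)] for s in S}
      heap = [(0.0, s) for s in S]       # (negative priority, state); start with every state
      heapq.heapify(heap)
      for _ in range(n_updates):
          if not heap:
              break
          _, s = heapq.heappop(heap)
          v_new = R[s] + gamma * max(
              sum(P[a][s][s2] * V[s2] for s2 in P[a][s]) for a in A)
          change = abs(v_new - V[s])
          V[s] = v_new
          if change > theta:
              # a large change in V(s) makes the backups of s's predecessors stale
              for s0 in pred[s]:
                  heapq.heappush(heap, (-change, s0))
      return V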

Extensions

Partial observability

The solution above assumes that the state s is known when an action is to be taken; otherwise π(s) cannot be calculated. When this assumption does not hold, the problem is called a partially observable Markov decision process or POMDP.

Learning

If the probabilities are unknown, the problem is one of reinforcement learning.

For this purpose it is useful to define a further function, which corresponds to taking the action a and then continuing optimally (or according to whatever policy one currently has):

Q(s,a) = R(s) + \gamma \sum_{s'} P_a(s,s') V(s')

While this function is also unknown, experience during learning is based on (s,a) pairs (together with the outcome s'); that is, "I was in state s and I tried doing a (and s' happened)". Thus, one has an array Q and uses experience to update it directly. This is known as Q-learning.
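
A sketch of the tabular Q-learning update under these assumptions: Q is a dictionary keyed by (state, action) pairs, the outcome s' of trying action a in state s is observed, and alpha is a hypothetical learning-rate parameter that the article does not mention.

  def q_update(Q, s, a, s_next, R, actions, gamma, alpha=0.1):
      # sampled version of Q(s,a) = R(s) + gamma * sum_{s'} P_a(s,s') V(s'):
      # the expectation over s' is replaced by the observed s_next, and V by max_a' Q
      target = R[s] + gamma * max(Q[(s_next, a2)] for a2 in actions)
      Q[(s, a)] += alpha * (target - Q[(s, a)])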

Minor extensions

These extensions are minor in that they complicate the notation, but make no real difference to the problem or its solution.

  • The reward may be a function of the action as well as the state, R(s,a).
  • The reward may be a function of the resulting state as well as the action and state, R(s,a,s').
  • The action space may be different at each state, so that it is A_s rather than A.

Alternative notations

The notation for MDPs is not entirely settled; there are two main streams: one uses action, reward, value and γ, while the other uses control, cost, cost-to-go and α. In addition, the notation for the transition probability varies.

in this article                     alternative                          comment
action a                            control u
reward R                            cost g                               g is the negative of R
value V                             cost-to-go J                         J is the negative of V
policy π                            policy μ
discount factor γ                   discount factor α
transition probability P_a(s,s')    transition probability p_{ss'}(a)

In addition, the transition probability is sometimes written Pr(s,a,s'), Pr(s' | s,a) or, rarely, p_{s's}(a).

References

  • R. Bellman. A Markovian Decision Process. Journal of Mathematics and Mechanics 6, 1957.
  • R. E. Bellman. Dynamic Programming. Princeton University Press, Princeton, NJ, 1957.
  • R. A. Howard. Dynamic Programming and Markov Processes. The M.I.T. Press, 1960.
  • M. L. Puterman. Markov Decision Processes. Wiley, 1994.
  • H. C. Tijms. A First Course in Stochastic Models. Wiley, 2003.
  • R. S. Sutton. On the significance of Markov decision processes. In W. Gerstner, A. Germond, M. Hasler, and J.-D. Nicoud (Eds.), Artificial Neural Networks -- ICANN'97, pp. 273-282. Springer, 1997.
  • R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. The MIT Press, Cambridge, MA, 1998.
