How to Learn the Reward Function in a Markov Decision Process

What's the appropriate way to update your R(s) function during Q-learning? For example, say an agent visits state s1 five times, and receives rewards [0,0,1,1,0]. Should I calculate the mean reward, e.g. R(s1) = sum([0,0,1,1,0])/5? Or should I use a moving average that gives greater weight to the more recent reward values received for that state? Most of the descriptions of Q-learning I've read treat R(s) as some sort of constant, and never seem to cover how you might learn this value over time as experience is accumulated.

EDIT: I may be confusing the R(s) in Q-Learning with R(s,s') in a Markov Decision Process. The question remains similar. When learning an MDP, what's the best way to update R(s,s')?
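For reference, here is a minimal sketch of the two options mentioned in the question (sample mean vs. moving average) for a per-state reward estimate. The names `reward_mean`, `reward_ema` and the fixed step size are illustrative assumptions, not part of any library.

```python
# Two common ways to maintain a per-state reward estimate from a stream of
# observed rewards: an incremental sample mean and an exponential moving average.

from collections import defaultdict

counts = defaultdict(int)        # number of times each state was visited
reward_mean = defaultdict(float) # running (sample) mean of rewards per state
reward_ema = defaultdict(float)  # exponential moving average per state
alpha = 0.1                      # step size for the moving average (illustrative)

def update_reward_estimates(state, reward):
    # Incremental sample mean: equivalent to sum(rewards) / n,
    # but without storing the full history.
    counts[state] += 1
    reward_mean[state] += (reward - reward_mean[state]) / counts[state]

    # Exponential moving average: recent rewards get more weight,
    # which is preferable if the reward distribution drifts over time.
    reward_ema[state] += alpha * (reward - reward_ema[state])

# Example from the question: state s1 observed with rewards [0, 0, 1, 1, 0]
for r in [0, 0, 1, 1, 0]:
    update_reward_estimates("s1", r)
print(reward_mean["s1"])  # 0.4, i.e. sum([0,0,1,1,0]) / 5
print(reward_ema["s1"])   # recency-weighted estimate
```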


Q-Learning maintains an action-value estimate for each state-action pair and updates it incrementally from the reward observed at each step. Under the greedy policy, the value of a state is simply the value of its best action. The canonical description of Q-Learning is given in Sutton and Barto's Reinforcement Learning: An Introduction.
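For concreteness, here is a minimal tabular sketch of that update. The step size `alpha`, discount `gamma`, exploration rate `epsilon`, and the helper names are assumptions for illustration, not part of any particular library.

```python
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.99, 0.1
Q = defaultdict(float)  # Q[(state, action)] -> estimated action value

def q_learning_update(state, action, reward, next_state, actions):
    # The target uses the greedy (max) action value in the next state,
    # which is why Q-learning is an off-policy method.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def epsilon_greedy(state, actions):
    # Behavior policy: explore with probability epsilon, otherwise act greedily.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])
```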

There is no single "best" way to update, but SARSA is a good default. SARSA is similar to Q-Learning, except that it learns the value of the policy it actually follows, rather than that of the greedy policy.
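For contrast, a minimal sketch of the SARSA update under the same tabular assumptions as above (the names `Q`, `alpha`, `gamma` are again illustrative). The only change from the Q-learning sketch is the target, which uses the action the behavior policy actually takes next.

```python
from collections import defaultdict

alpha, gamma = 0.1, 0.99
Q = defaultdict(float)  # Q[(state, action)] -> estimated action value

def sarsa_update(state, action, reward, next_state, next_action):
    # next_action is the action the behavior policy actually selected in
    # next_state, so SARSA evaluates the policy it follows (on-policy).
    target = reward + gamma * Q[(next_state, next_action)]
    Q[(state, action)] += alpha * (target - Q[(state, action)])
```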


In standard model-free RL (like Q-learning), you do not learn the reward function. What you learn is the value function or Q-value function. Rewards are obtained by interacting with the environment, and you estimate the expected discounted sum of rewards over time for state-action pairs.
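To make "expected discounted sum of rewards" concrete, here is a minimal every-visit Monte Carlo sketch that estimates Q(s, a) directly from complete episodes. The episode format `[(state, action, reward), ...]` and the table names are assumptions for illustration.

```python
from collections import defaultdict

gamma = 0.99
returns_sum = defaultdict(float)   # (state, action) -> sum of observed returns
returns_count = defaultdict(int)   # (state, action) -> number of observations
Q = defaultdict(float)             # (state, action) -> estimated value

def monte_carlo_update(episode):
    # Walk the episode backwards, accumulating the discounted return G,
    # and average it into Q for each (state, action) pair visited.
    G = 0.0
    for state, action, reward in reversed(episode):
        G = reward + gamma * G
        returns_sum[(state, action)] += G
        returns_count[(state, action)] += 1
        Q[(state, action)] = returns_sum[(state, action)] / returns_count[(state, action)]
```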

If you are using a model-based approach, it is different: you try to learn a model of the environment, that is, the transition and reward functions. But this is not the case with Q-learning.
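If you do go model-based, one common approach (sketched here as an assumption, not prescribed by the answer) is to keep maximum-likelihood estimates of the transition and reward functions from observed transitions. This is also where the "average the observed rewards" idea from the question naturally fits.

```python
from collections import defaultdict

transition_counts = defaultdict(int)   # (s, a, s') -> visit count
action_counts = defaultdict(int)       # (s, a)     -> visit count
reward_mean = defaultdict(float)       # (s, a, s') -> running mean reward

def update_model(s, a, r, s_next):
    transition_counts[(s, a, s_next)] += 1
    action_counts[(s, a)] += 1
    # Incremental mean of the reward observed on this transition,
    # i.e. an empirical estimate of R(s, a, s').
    n = transition_counts[(s, a, s_next)]
    reward_mean[(s, a, s_next)] += (r - reward_mean[(s, a, s_next)]) / n

def transition_prob(s, a, s_next):
    # Maximum-likelihood estimate of T(s' | s, a).
    if action_counts[(s, a)] == 0:
        return 0.0
    return transition_counts[(s, a, s_next)] / action_counts[(s, a)]
```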
