RL
[David Silver] 7. Policy Gradient: REINFORCE, Actor-Critic, NPG
This post is my notes from David Silver's Reinforcement Learning course. In the last lecture, we approximated the value or action-value function using parameters $\theta$, and the policy was generated directly from the value function. In this lecture, we will directly parameterise the policy as a stochastic function $\pi_\theta(s,a) = \mathbb{P}[a \mid s, \theta]$. This taxonomy explains value-based and policy-based RL well. Value-base..
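As a minimal sketch of what "directly parameterising the policy" means, the snippet below builds a stochastic softmax policy $\pi_\theta(s,a)$ over linear state-action scores. The feature function and dimensions are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

def features(state, action, n_features=4):
    # Hypothetical state-action feature vector (a fixed random
    # projection, seeded per (state, action) for reproducibility).
    rng = np.random.default_rng(abs(hash((state, action))) % (2**32))
    return rng.standard_normal(n_features)

def softmax_policy(state, actions, theta):
    # pi_theta(s, a) = exp(x(s,a)^T theta) / sum_b exp(x(s,b)^T theta)
    scores = np.array([features(state, a) @ theta for a in actions])
    scores -= scores.max()          # subtract max for numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

theta = np.zeros(4)
probs = softmax_policy(state=0, actions=[0, 1, 2], theta=theta)
print(probs)  # with theta = 0 every score is 0, so all actions are equally likely
```

Learning then means adjusting $\theta$ (e.g. by following the policy gradient, as REINFORCE does) rather than deriving the policy from a value function.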
[David Silver] 6. Value Function Approximation: Experience Replay, Deep Q-Network (DQN)
This post is my notes from David Silver's Reinforcement Learning course. This lecture presents a solution for large MDPs using function approximation. We have to scale up the model-free methods for prediction and control, so in lectures 6 and 7 we will learn how to do that. How have we dealt with small (not large) MDPs so far? We have represented the value function by a looku..
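To make the contrast with a lookup table concrete, here is a small sketch of linear value function approximation, $\hat v(s, \mathbf{w}) = \mathbf{x}(s)^\top \mathbf{w}$, updated by a gradient step toward a Monte Carlo return. The feature map and the (state, return) samples are illustrative assumptions.

```python
import numpy as np

def x(state):
    # Hypothetical feature vector: a polynomial in a scalar state.
    return np.array([1.0, state, state**2])

def v_hat(state, w):
    # Approximate value: v_hat(s, w) = x(s)^T w
    return x(state) @ w

# Gradient Monte Carlo update: w <- w + alpha * (G_t - v_hat(S_t, w)) * x(S_t)
w = np.zeros(3)
alpha = 0.1
for state, G in [(1.0, 2.0), (2.0, 5.0), (1.0, 2.0)]:
    w += alpha * (G - v_hat(state, w)) * x(state)

print(v_hat(1.0, w))
```

Unlike a table with one entry per state, the weight vector $\mathbf{w}$ has a fixed size, so nearby states share information and the method scales to large (even continuous) state spaces.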