ActorCritic
[David Silver] 7. Policy Gradient: REINFORCE, Actor-Critic, NPG
This post is my write-up of David Silver's Reinforcement Learning course. In the last lecture, we approximated the value or action-value function using parameters $\theta$, and the policy was generated directly from the value function. In this lecture, we will instead directly parameterise the policy as a stochastic policy $\pi_\theta(s,a) = \mathbb{P}[a \mid s, \theta]$. This taxonomy explains Value-based and Policy-based RL well. Value-base..
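A minimal sketch of what "directly parameterising the policy" can look like, using my own illustrative choice of a softmax policy over linear state-action features (the feature function `features` and all sizes below are hypothetical, not from the lecture):

```python
import numpy as np

def features(state, action, n_actions, n_state_feats):
    """Hypothetical feature vector phi(s, a): the state's features are
    placed in the block of the vector corresponding to the action."""
    phi = np.zeros(n_state_feats * n_actions)
    phi[action * n_state_feats:(action + 1) * n_state_feats] = state
    return phi

def softmax_policy(theta, state, n_actions, n_state_feats):
    """Return pi_theta(. | s), a probability distribution over actions:
    pi_theta(s, a) proportional to exp(phi(s, a) @ theta)."""
    prefs = np.array([features(state, a, n_actions, n_state_feats) @ theta
                      for a in range(n_actions)])
    prefs -= prefs.max()               # subtract max for numerical stability
    exp_prefs = np.exp(prefs)
    return exp_prefs / exp_prefs.sum()

# Example: 3 actions, 4 state features, random parameters theta.
n_actions, n_state_feats = 3, 4
rng = np.random.default_rng(0)
theta = rng.normal(size=n_actions * n_state_feats)
state = rng.normal(size=n_state_feats)

probs = softmax_policy(theta, state, n_actions, n_state_feats)
print(probs)  # non-negative entries that sum to 1
```

Because the action probabilities are an explicit, differentiable function of $\theta$, we can later take gradients of the policy itself, which is exactly what policy gradient methods such as REINFORCE and Actor-Critic exploit.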