Robotics & Perception/Probabilistic Robotics
(Work in progress) Optimal Estimation Algorithms: Kalman and Particle Filters
Kalman filter: represents the belief as a Gaussian distribution. Particle filter: a sampling-based algorithm.
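Since the post above is still a draft, here is a minimal 1D sketch of the Kalman filter's Gaussian belief update (my own illustration, not taken from the post; the function names and numbers are assumptions). A particle filter would instead represent the same belief with weighted samples rather than a mean and variance.

```python
# Minimal 1D Kalman filter sketch (illustrative only; names and numbers are made up).
# The belief is a Gaussian N(mu, sigma2): the motion step inflates the variance,
# the measurement step shrinks it via the Kalman gain.

def kf_predict(mu, sigma2, u, motion_var):
    # Motion update: shift the mean by the control u, add motion noise.
    return mu + u, sigma2 + motion_var

def kf_update(mu, sigma2, z, meas_var):
    # Measurement update: the Kalman gain k weighs prediction against measurement z.
    k = sigma2 / (sigma2 + meas_var)
    return mu + k * (z - mu), (1.0 - k) * sigma2

# Example: start uncertain, move by 1.0, then observe 1.2.
mu, sigma2 = 0.0, 10.0
mu, sigma2 = kf_predict(mu, sigma2, u=1.0, motion_var=0.5)
mu, sigma2 = kf_update(mu, sigma2, z=1.2, meas_var=1.0)
print(mu, sigma2)
```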
[Probabilistic Robotics] Planning and Control: Partially Observable Markov Decision Processes
This post summarizes Chapter 16 of Probabilistic Robotics, Partially Observable Markov Decision Processes. To choose the right action, we need accurate state estimation. Information-gathering tasks, such as robot exploration, are needed for precise state estimation (i.e., to reduce uncertainty). There are two types of uncertainty: uncertainty in action and uncertainty in perception. Uncertainty in action) D..
[Probabilistic Robotics] Planning and Control: Uncertainty in action/Belief space
This post summarizes Chapter 15 of Probabilistic Robotics, Markov Decision Processes. To choose the right action, we need accurate state estimation. Information-gathering tasks, such as robot exploration, are needed for precise state estimation (i.e., to reduce uncertainty). There are two types of uncertainty: uncertainty in action and uncertainty in perception. Uncertainty..
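Both excerpts above plan over the belief (the posterior over states) rather than the true state. For reference, the standard Bayes filter belief update from Probabilistic Robotics, which both chapters build on, couples the two kinds of uncertainty (this equation is standard textbook material, not quoted from the truncated previews):

$$ \mathrm{bel}(x_t) = \eta \, p(z_t \mid x_t) \int p(x_t \mid u_t, x_{t-1}) \, \mathrm{bel}(x_{t-1}) \, dx_{t-1} $$

The motion model $p(x_t \mid u_t, x_{t-1})$ captures uncertainty in action, and the measurement model $p(z_t \mid x_t)$ captures uncertainty in perception.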
[Robotics] MDP and POMDP Summary
This post is based on David Silver's reinforcement learning slides and this site. ✂️ Markov decision process (MDP) A Markov decision process is a Markov reward process with decisions. It is an environment in which all states are Markov. A Markov decision process is a tuple $$\langle S, A, P, R, \gamma \rangle$$ where S: states, A: actions, P: state transition probability matrix, R: reward function, $\gamma$: discount factor. A policy $\pi$ is a distribution over actions given ..
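The excerpt above defines an MDP as the tuple $$\langle S, A, P, R, \gamma \rangle$$ and a policy as a distribution over actions given states. Below is a minimal sketch of those definitions on a made-up two-state problem (the state/action names, probabilities, and rewards are my own for illustration, not from the post):

```python
import random

# Sketch of the MDP tuple <S, A, P, R, gamma> and a stochastic policy pi(a|s).
# All concrete numbers below are invented for illustration.
S = ["s0", "s1"]
A = ["stay", "go"]
P = {  # P[s][a][s'] = state transition probability
    "s0": {"stay": {"s0": 1.0, "s1": 0.0}, "go": {"s0": 0.2, "s1": 0.8}},
    "s1": {"stay": {"s0": 0.0, "s1": 1.0}, "go": {"s0": 0.9, "s1": 0.1}},
}
R = {"s0": 0.0, "s1": 1.0}  # reward function (here: reward for landing in a state)
gamma = 0.9                 # discount factor

# Policy pi: a distribution over actions given the current state.
pi = {"s0": {"stay": 0.1, "go": 0.9}, "s1": {"stay": 0.8, "go": 0.2}}

def step(s, a):
    # Sample the next state from the transition distribution P[s][a].
    next_states, probs = zip(*P[s][a].items())
    return random.choices(next_states, probs)[0]

def rollout(s, horizon=5):
    # Follow the stochastic policy and accumulate the discounted return.
    ret = 0.0
    for t in range(horizon):
        a = random.choices(list(pi[s]), list(pi[s].values()))[0]
        s = step(s, a)
        ret += (gamma ** t) * R[s]
    return ret

print(rollout("s0"))
```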