Policy Gradient Methods

In summary, I guess the reasons are: (1) the policy (the probability of an action) is written as a parameterized distribution π_θ(a|s); (2) a "math trick" applied to the gradient of the objective function (i.e., the value function) turns it into an "expectation" form for ∇_θ J(θ), which is why "ln" is applied to the policy before taking the gradient, for convenience of analysis.

Notation
J(θ): …
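As a reminder of the "ln" trick the summary refers to, here is a minimal sketch of the standard identity (not specific to this post); the state-visitation distribution is omitted for brevity, and Q^{π_θ} denotes the action-value function under the current policy.

```latex
% Log-derivative trick: the gradient of the policy is the policy times the gradient of its log.
\nabla_\theta \pi_\theta(a \mid s) = \pi_\theta(a \mid s)\,\nabla_\theta \ln \pi_\theta(a \mid s)

% Applying it inside the gradient of the objective turns the sum into an expectation under the policy:
\nabla_\theta J(\theta)
  = \sum_a \nabla_\theta \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s,a)
  = \mathbb{E}_{a \sim \pi_\theta}\!\left[\, \nabla_\theta \ln \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s,a) \,\right]
```

This expectation form is what makes sampling-based estimation of the gradient possible, since it can be approximated by averaging ∇_θ ln π_θ(a|s) Q(s,a) over actions drawn from the policy itself.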

Hierarchical Policy Gradient Algorithms

Math Notation (a rough illustration follows after this list):
M : the overall task MDP.
{M0, M1, M2, M3, …, Mn} : a finite set of subtask MDPs.
Mi : a subtask; models one subtask in the hierarchy.
M0 : the root task; solving it solves the entire MDP M.
i : a non-primitive subtask; the paper uses …
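Purely as an illustration of the notation above (not code from the paper), a subtask hierarchy rooted at M0 could be mirrored in a data structure roughly like the following; the class and field names here are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class SubtaskMDP:
    """One node M_i in the task hierarchy (hypothetical structure, for illustration)."""
    name: str                                                    # e.g. "M0" for the root task
    children: List["SubtaskMDP"] = field(default_factory=list)   # child subtasks; empty means primitive

    @property
    def is_primitive(self) -> bool:
        # A subtask with no children corresponds to a primitive action.
        return not self.children


# The overall task M is represented by the root task M0:
# solving M0 solves the entire MDP M.
M3 = SubtaskMDP("M3")
M1 = SubtaskMDP("M1", children=[M3])
M2 = SubtaskMDP("M2")
M0 = SubtaskMDP("M0", children=[M1, M2])
```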
