RL

Randomized Linear Programming Solves the Discounted Markov Decision Problem In Nearly-Linear (Sometimes Sublinear) Run Time

Randomized Linear Programming Solves the Discounted Markov Decision Problem In Nearly-Linear (Sometimes Sublinear) Run Time The nonlinear Bellman equation is equivalent to a linear programming problem, stated as a primal-dual pair: the primal LP (1), the dual LP (2), and the equivalent minimax problem (3).
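The equations labelled (1)-(3) did not survive extraction. As a sketch of what they most likely refer to, here is the standard LP formulation of a discounted MDP in LaTeX, with transition matrices P_a, reward vectors r_a, discount factor γ, and an initial-state distribution q (all notation assumed here, not taken verbatim from the post):

% (1) Primal LP: the value vector v satisfies the Bellman inequalities
\min_{v} \; (1-\gamma)\, q^\top v
\quad \text{s.t.} \quad (I - \gamma P_a)\, v \ge r_a \quad \forall a \in \mathcal{A}

% (2) Dual LP: \mu_a is the discounted state-action occupancy measure
\max_{\mu \ge 0} \; \sum_{a \in \mathcal{A}} \mu_a^\top r_a
\quad \text{s.t.} \quad \sum_{a \in \mathcal{A}} (I - \gamma P_a)^\top \mu_a = (1-\gamma)\, q

% (3) Equivalent minimax (saddle-point) problem coupling (1) and (2)
\min_{v} \max_{\mu \ge 0} \; (1-\gamma)\, q^\top v
  + \sum_{a \in \mathcal{A}} \mu_a^\top \big( r_a + (\gamma P_a - I)\, v \big)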

KL Divergence

KL Divergence In mathematical statistics, the Kullback–Leibler divergence (also called relative entropy) is a measure of how one probability distribution differs from a second, reference probability distribution. https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence Sections: Information entropy; KL divergence.
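As a concrete illustration (my own sketch, not from the post), a minimal Python snippet for the discrete KL divergence D_KL(P || Q) = Σ_i p_i log(p_i / q_i):

import numpy as np

def kl_divergence(p, q):
    """Discrete KL divergence D_KL(P || Q) in nats; assumes q_i > 0 wherever p_i > 0."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0                       # terms with p_i = 0 contribute 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.5, 0.4, 0.1]
q = [1/3, 1/3, 1/3]
print(kl_divergence(p, q))   # D_KL(P || Q)
print(kl_divergence(q, p))   # D_KL(Q || P); generally different, since KL is asymmetric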

The Asymptotic Convergence-Rate of Q-learning

The Asymptotic Convergence-Rate of Q-learning The asymptotic rate of convergence of Q-learning is O(1/t^{R(1-γ)}) when R(1-γ) < 0.5, where R = p_min/p_max and p is the state-action occupation frequency: |Q_t(x,a) − Q*(x,a)| < B / t^{R(1-γ)}. The convergence rate bounds the gap between the current estimate and the optimal value, so the smaller it is, the faster Q-learning converges. We hope the O(1/t^{R(1-γ)}) should… read more »
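To ground the Q_t(x, a) notation, here is a minimal tabular Q-learning sketch in Python with a 1/t (per-visit) learning rate, in the spirit of the analysis above; the environment `step` function and the toy chain below are hypothetical examples, not from the paper:

import numpy as np

def q_learning(step, n_states, n_actions, gamma=0.9, episodes=500, horizon=50):
    Q = np.zeros((n_states, n_actions))
    visits = np.zeros((n_states, n_actions))          # per-pair visit counts, for the 1/t step size
    for _ in range(episodes):
        x = 0                                         # assume state 0 is the start state
        for _ in range(horizon):
            a = np.random.randint(n_actions)          # exploratory (uniform) behaviour policy
            x_next, r, done = step(x, a)
            visits[x, a] += 1
            alpha = 1.0 / visits[x, a]                # 1/t learning rate for this (x, a) pair
            Q[x, a] += alpha * (r + gamma * np.max(Q[x_next]) - Q[x, a])
            x = x_next
            if done:
                break
    return Q

def step(x, a):                                       # hypothetical two-state toy chain
    x_next = min(x + a, 1)
    r = 1.0 if (x == 1 and a == 1) else 0.0
    return x_next, r, False

print(q_learning(step, n_states=2, n_actions=2))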

Policy Gradient Methods

Policy Gradient Methods In summary, I guess the reasons are: 1. the policy (the probability of an action) has a particular functional form, and 2. a 'math trick' applied in the gradient of the objective function (i.e., the value function) yields an 'Expectation' form, which is why 'ln' is applied to the policy before taking the gradient, for analytical convenience. Notation J(θ):… read more »
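The 'math trick' alluded to above is presumably the log-derivative identity; a standard sketch in LaTeX (not copied from the post):

\nabla_\theta \pi_\theta(a \mid s) = \pi_\theta(a \mid s)\, \nabla_\theta \ln \pi_\theta(a \mid s),

% so the policy-gradient objective becomes an expectation that can be sampled:
\nabla_\theta J(\theta)
  \propto \sum_{s} d^{\pi}(s) \sum_{a} \nabla_\theta \pi_\theta(a \mid s)\, Q^{\pi}(s, a)
  = \mathbb{E}_{s \sim d^{\pi},\, a \sim \pi_\theta}\!\left[ \nabla_\theta \ln \pi_\theta(a \mid s)\, Q^{\pi}(s, a) \right]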

Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation

Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation When the environment's rewards are sparse and delayed, the paper offers a solution: there is only ever one agent, but it operates in two stages: 1. a meta-controller stage, which selects a goal; 2. a controller stage, in which the controller outputs actions based on the current state and the goal, while a critic judges whether the goal has been completed or a terminal state reached. Stages 1 and 2 repeat: the meta-controller selects a new goal, the controller outputs actions again, and so on. My understanding is that this 'splits' the environment into N small sub-environments along the time axis, each paired with one goal; in such an environment the agent itself can effectively be treated as a point. The key is that the agent chooses the policy over goals πg that maximizes the expected discounted Q-value, i.e., if the Q-value of the goal sequence g1-g3-g2-… is the maximum among all possible goal sequences, the agent should… read more »
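A minimal Python sketch (my reading of the loop described above, not code from the paper) of the two-stage interaction; `env`, `meta_controller`, `controller`, and `critic` are hypothetical callables with just the interfaces the loop needs:

def hierarchical_episode(env, meta_controller, controller, critic, max_steps=1000):
    state = env.reset()
    done = False
    steps = 0
    while not done and steps < max_steps:
        goal = meta_controller(state)                  # stage 1: meta-controller picks a goal
        goal_reached = False
        while not (done or goal_reached) and steps < max_steps:
            action = controller(state, goal)           # stage 2: act conditioned on (state, goal)
            state, extrinsic_reward, done = env.step(action)
            goal_reached = critic(state, goal)         # intrinsic check: goal completed or not
            steps += 1
        # updates would go here: the controller learns from the intrinsic reward,
        # the meta-controller learns from the accumulated extrinsic_reward
    return state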

Decentralized Optimal Control of Distributed Interdependent Automata With Priority Structure

Decentralized Optimal Control of Distributed Interdependent Automata With Priority Structure Data Flowchart Notation: the subsystem model, the plant P_i, is a deterministic finite-state automaton defined by (1)-(4); P_i can be transitioned from one state into another if the input l is applied (5). It encodes that the transition is possible with at least… read more »
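As an illustration of the transition notation (hypothetical names, not the paper's), a deterministic finite-state automaton in Python where applying an input l in a state either yields a unique successor state or is not possible:

from typing import Dict, Hashable, Tuple

State = Hashable
Input = Hashable

class PlantAutomaton:
    """Deterministic finite-state automaton given as a partial (state, input) -> state map."""
    def __init__(self, transitions: Dict[Tuple[State, Input], State], initial: State):
        self.transitions = transitions
        self.state = initial

    def can_apply(self, l: Input) -> bool:
        # True if input l is applicable in the current state
        return (self.state, l) in self.transitions

    def apply(self, l: Input) -> State:
        # apply input l; move to the deterministic successor state
        self.state = self.transitions[(self.state, l)]
        return self.state

# Example: a two-state plant P_i with inputs 'start' and 'stop'
p_i = PlantAutomaton({("idle", "start"): "busy", ("busy", "stop"): "idle"}, initial="idle")
print(p_i.can_apply("start"))   # True
print(p_i.apply("start"))       # busy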

Neural-network-based decentralized control of continuous-time nonlinear interconnected systems with unknown dynamics

Neural-network-based decentralized control of continuous-time nonlinear interconnected systems with unknown dynamics – Math and Optimal Control Problem formulation: consider a continuous-time nonlinear large-scale system ∑ composed of N interconnected subsystems described by (1), where x_i(t) ∈ R^{n_i} is the state of the ith subsystem; the overall state of the large-scale system ∑ is denoted accordingly, and u_i[x_i(t)] ∈ R^{m_i} is the control input vector of the ith… read more »
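Equation (1) itself did not survive extraction; a typical form assumed in this class of decentralized-control papers (a sketch, not necessarily the paper's exact equation) is:

\dot{x}_i(t) = f_i\big(x_i(t)\big) + g_i\big(x_i(t)\big)\Big( u_i\big(x_i(t)\big) + \bar{Z}_i\big(x(t)\big) \Big),
\qquad i = 1, \dots, N,

where f_i and g_i are the (unknown) drift and input dynamics of the ith subsystem and \bar{Z}_i(x) captures its interconnection with the other subsystems.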
