Finite-Sample Convergence Rates for Q-Learning and Indirect Algorithms
Solving H-horizon, Stationary Markov Decision Problems in Time Proportional to log(H). Paul Tseng, Operations Research Letters 9 (1990) 287-297.
Randomized Linear Programming Solves the Discounted Markov Decision Problem in Nearly-Linear (Sometimes Sublinear) Run Time. The nonlinear Bellman equation can be recast as a linear program: the primal LP (1), the dual LP (2), and an equivalent bilinear minimax problem (3); a sketch of the three formulations follows below.
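A minimal sketch of the three formulations referenced as (1)-(3), written in the standard notation for discounted MDPs (state x, action a, reward r, transition kernel P, discount γ, initial distribution q); the exact symbols are an assumption here, not copied from the paper.

```latex
% (1) Primal LP: the optimal value function is the smallest v
%     satisfying all Bellman inequalities.
\[
\min_{v \in \mathbb{R}^{|S|}} \; \sum_{x} q(x)\, v(x)
\quad \text{s.t.} \quad
v(x) \;\ge\; r(x,a) + \gamma \sum_{x'} P(x' \mid x, a)\, v(x')
\quad \forall (x,a).
\]

% (2) Dual LP: optimize over discounted state-action occupancy measures mu.
\[
\max_{\mu \ge 0} \; \sum_{x,a} r(x,a)\, \mu(x,a)
\quad \text{s.t.} \quad
\sum_{a} \mu(x,a) \;=\; q(x) + \gamma \sum_{x',a'} P(x \mid x', a')\, \mu(x',a')
\quad \forall x.
\]

% (3) Minimax problem: the Lagrangian couples (1) and (2) into a
%     bilinear saddle-point problem.
\[
\min_{v} \; \max_{\mu \ge 0} \;
\sum_{x} q(x)\, v(x)
+ \sum_{x,a} \mu(x,a)\,
\Bigl( r(x,a) + \gamma \sum_{x'} P(x' \mid x,a)\, v(x') - v(x) \Bigr).
\]
```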
The Asymptotic Convergence-Rate of Q-learning. The asymptotic rate of convergence of Q-learning is O(1/t^{R(1-γ)}) when R(1-γ) < 0.5, where R = p_min/p_max is the ratio of the minimum to the maximum state-action occupation frequencies. Concretely, |Q_t(x,a) − Q*(x,a)| < B / t^{R(1-γ)}. This rate bounds the gap between the current estimate Q_t and the optimal value Q*; the faster the bound shrinks, the faster Q-learning converges. We hope the O(1/t^{R(1-γ)}) should…
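A minimal numerical sketch of how this bound can be checked empirically. It is not from the paper: the tiny random MDP, the uniform sampling scheme (which makes the occupation ratio R ≈ 1), and the constant used to anchor the envelope are all illustrative assumptions. It runs tabular Q-learning with a 1/visit-count learning rate and compares max_{x,a}|Q_t(x,a) − Q*(x,a)| against an envelope proportional to 1/t^{R(1-γ)}.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP, used only to illustrate the bound.
np.random.seed(0)
n_states, n_actions = 2, 2
gamma = 0.9
P = np.random.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[x, a, x']
R_mat = np.random.rand(n_states, n_actions)                              # r(x, a)

# Exact Q* via value iteration, for measuring |Q_t - Q*|.
Q_star = np.zeros((n_states, n_actions))
for _ in range(10_000):
    Q_star = R_mat + gamma * P @ Q_star.max(axis=1)

# Tabular Q-learning with learning rate 1 / (visit count of (x, a)).
# Sampling (x, a) uniformly makes the occupation ratio R_occ = p_min/p_max ~ 1.
Q = np.zeros((n_states, n_actions))
visits = np.zeros((n_states, n_actions))
T = 200_000
errors = np.zeros(T)
for t in range(T):
    x, a = np.random.randint(n_states), np.random.randint(n_actions)
    x_next = np.random.choice(n_states, p=P[x, a])
    visits[x, a] += 1
    alpha = 1.0 / visits[x, a]
    Q[x, a] += alpha * (R_mat[x, a] + gamma * Q[x_next].max() - Q[x, a])
    errors[t] = np.abs(Q - Q_star).max()

# Envelope B / t^{R_occ (1 - gamma)}; B is anchored to the error at step 101,
# purely for visual comparison (the theorem does not specify B this way).
R_occ = 1.0
t_axis = np.arange(1, T + 1)
envelope = errors[100] * (101.0 / t_axis) ** (R_occ * (1 - gamma))
print(f"final error {errors[-1]:.4f}, envelope {envelope[-1]:.4f}")
```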