Variance Reduction for Reinforcement Learning
in Input-Driven Environments


Hongzi Mao      Shaileshh Bojja Venkatakrishnan      Malte Schwarzkopf      Mohammad Alizadeh     

Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology


Abstract


We consider reinforcement learning in input-driven environments, where an exogenous, stochastic input process affects the dynamics of the system. Input processes arise in many applications, including queuing systems, robotics control with disturbances, and object tracking. Since the state dynamics and rewards depend on the input process, the state alone provides limited information about the expected future returns. Therefore, policy gradient methods with standard state-dependent baselines suffer from high variance during training. We derive a bias-free, input-dependent baseline to reduce this variance, and analytically show its benefits over state-dependent baselines. We then propose a meta-learning approach to overcome the complexity of learning a baseline that depends on a long sequence of inputs. Our experimental results show that across environments from queuing systems, computer networks, and MuJoCo robotic locomotion, input-dependent baselines consistently improve training stability and result in better eventual policies.
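To give a rough intuition for the variance argument above, the sketch below (not code from the paper; all names and numbers are illustrative) compares advantage estimates under a state-dependent baseline b(s) and an input-dependent baseline b(s, z) on synthetic returns. Because neither baseline depends on the action, both leave the policy gradient unbiased; only the variance differs.

```python
import numpy as np

def advantages(returns, baselines):
    """Advantage = return minus baseline; a baseline that does not depend on
    the action shifts the estimate without changing its expectation, so it
    only affects variance."""
    return returns - baselines

# Hypothetical rollout data: returns collected from the same state under
# different realizations of the exogenous input process z.
rng = np.random.default_rng(0)
n = 1000
state_value = 1.0                       # what a state-dependent baseline b(s) can predict
input_effect = rng.normal(0.0, 5.0, n)  # return variation explained only by the input z
noise = rng.normal(0.0, 0.1, n)         # residual randomness due to the policy's actions
returns = state_value + input_effect + noise

# State-dependent baseline: b(s) ~ E[return | s]; it cannot account for z.
adv_state = advantages(returns, state_value)

# Input-dependent baseline: b(s, z) also conditions on the realized input sequence.
adv_input = advantages(returns, state_value + input_effect)

print(np.var(adv_state))  # large: inflated by variability of the input process
print(np.var(adv_input))  # small: only the action-driven randomness remains
```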


Paper


Variance Reduction for Reinforcement Learning in Input-Driven Environments
Hongzi Mao, Shaileshh Bojja Venkatakrishnan, Malte Schwarzkopf, Mohammad Alizadeh
In Proceedings of the 7th International Conference on Learning Representations (ICLR), 2019.
[PDF]


Code


[GitHub]


Demo


[Demo]


Poster


[Poster]


Supporters


This project is supported by the NSF, a Google Faculty Research Award, an AWS Machine Learning Research Award, a Cisco Research Center Award, an Alfred P. Sloan Research Fellowship, and the sponsors of the MIT Data Systems and AI Lab.