Publications by Author: Gabriel Barth-Maron

2020
Srivatsan Krishnan, Maximilian Lam, Sharad Chitlangia, Zishen Wan, Gabriel Barth-Maron, Aleksandra Faust, and Vijay Janapa Reddi. 11/28/2020. “QuaRL: Quantization for Sustainable Reinforcement Learning”. Abstract:
Deep reinforcement learning continues to show tremendous potential in achieving task-level autonomy; however, its computational and energy demands remain prohibitively high. In this paper, we tackle this problem by applying quantization to reinforcement learning. To that end, we introduce a novel Reinforcement Learning (RL) training paradigm, ActorQ, to speed up actor-learner distributed RL training. ActorQ leverages 8-bit quantized actors to speed up data collection without affecting learning convergence. Our quantized distributed RL training system, ActorQ, demonstrates end-to-end speedups of 1.5×-2.5× and faster convergence over full-precision training on a range of tasks (DeepMind Control Suite) and different RL algorithms (D4PG, DQN). Furthermore, we compare the carbon emissions (kg of CO2) of ActorQ versus standard reinforcement learning on various tasks. Across various settings, we show that ActorQ enables more environmentally friendly reinforcement learning by achieving 2.8× lower carbon emissions and energy consumption compared to training RL agents in full precision. Finally, we demonstrate empirically that aggressively quantized RL policies (down to 4-5 bits) enable significant speedups on quantization-friendly (i.e., with native quantization support) resource-constrained edge devices, without degrading accuracy. We believe that this is the first of many future works on enabling computationally and energy-efficient, sustainable reinforcement learning. The source code for QuaRL is available here: https://github.com/harvard-edge/QuaRL.
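The core idea behind the 8-bit quantized actors described above can be illustrated with a minimal sketch of uniform affine quantization, the standard scheme for mapping float32 weights to 8-bit integers and back. This is an illustrative example only; the function names are hypothetical and it does not reproduce the ActorQ implementation from the paper's repository.

```python
import numpy as np

def quantize_8bit(w):
    """Uniformly quantize a float array to uint8 (illustrative sketch,
    not the ActorQ implementation)."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_8bit(q, scale, lo):
    """Recover an approximate float32 array from its 8-bit encoding."""
    return q.astype(np.float32) * scale + lo

# Example: quantize a toy "policy weight" vector and measure the error.
weights = np.linspace(-1.0, 1.0, 256, dtype=np.float32)
q, scale, lo = quantize_8bit(weights)
recovered = dequantize_8bit(q, scale, lo)
max_err = float(np.abs(recovered - weights).max())
```

Because rounding moves each value by at most half a quantization step, the reconstruction error is bounded by `scale / 2`, which is why low-precision actors can collect experience without materially changing the policy's behavior.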