Abstract: Experience replay plays a crucial role in improving the sample efficiency of deep reinforcement learning agents. Recent advances in experience replay propose using Mixup (Zhang et al., 2018) to further improve sample efficiency via synthetic sample generation. We build upon this technique with Neighborhood Mixup Experience Replay (NMER), a geometrically-grounded replay buffer that interpolates transitions with their closest neighbors in state-action space. NMER preserves a locally linear approximation of the transition manifold by applying Mixup only between transitions with vicinal state-action features. Under NMER, a given transition's set of state-action neighbors is dynamic and episode-agnostic, in turn encouraging greater policy generalizability via inter-episode interpolation. We combine our approach with recent off-policy deep reinforcement learning algorithms and evaluate on continuous control environments. We observe that NMER improves sample efficiency by an average of 94% (TD3) and 29% (SAC) over baseline replay buffers, enabling agents to effectively recombine previous experiences and learn from limited data.
Neighborhood Mixup Experience Replay (NMER)
Ryan Sander1, Wilko Schwarting1, Tim Seyde1, Igor Gilitschenski1, Sertac Karaman2, Daniela Rus1
1 - MIT CSAIL, 2 - MIT LIDS
Paper (L4DC 2022) | Technical Report | arXiv | Website
What is NMER? NMER is a novel replay buffer technique for continuous control tasks that linearly recombines the previous experiences of deep reinforcement learning agents using a simple geometric heuristic.
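The core idea can be sketched as follows: when a training batch is drawn, each sampled transition is paired with one of its nearest neighbors in state-action space, and the two transitions are convexly interpolated with a Mixup coefficient drawn from a Beta distribution. The sketch below is a simplified, NumPy-only illustration of this heuristic under our own assumptions (brute-force neighbor search, a hypothetical `NeighborhoodMixupReplayBuffer` class, and Beta(alpha, alpha) mixing); it is not the official implementation, which will appear in the code release.

```python
import numpy as np

class NeighborhoodMixupReplayBuffer:
    """Illustrative sketch of Neighborhood Mixup Experience Replay.

    Stores (s, a, r, s') transitions; on sampling, each transition is
    Mixup-interpolated with a nearest neighbor in state-action space.
    """

    def __init__(self, capacity, k=5, alpha=1.0, seed=0):
        self.capacity = capacity
        self.k = k            # number of candidate neighbors
        self.alpha = alpha    # Beta distribution parameter for Mixup
        self.rng = np.random.default_rng(seed)
        self.storage = []     # each entry: (s, a, r, s_next)

    def add(self, s, a, r, s_next):
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)  # FIFO eviction, as in a standard buffer
        self.storage.append((np.asarray(s, float), np.asarray(a, float),
                             float(r), np.asarray(s_next, float)))

    def _features(self):
        # Concatenated state-action features define the neighborhood geometry
        return np.stack([np.concatenate([s, a]) for s, a, _, _ in self.storage])

    def sample(self, batch_size):
        feats = self._features()
        idx = self.rng.integers(len(self.storage), size=batch_size)
        batch = []
        for i in idx:
            # Brute-force k-nearest-neighbor search in state-action space
            dists = np.linalg.norm(feats - feats[i], axis=1)
            nbrs = np.argsort(dists)[1:self.k + 1]  # exclude i itself
            j = self.rng.choice(nbrs) if len(nbrs) else i
            lam = self.rng.beta(self.alpha, self.alpha)  # Mixup coefficient
            si, ai, ri, spi = self.storage[i]
            sj, aj, rj, spj = self.storage[j]
            # Convexly interpolate every component of the two transitions
            batch.append((lam * si + (1 - lam) * sj,
                          lam * ai + (1 - lam) * aj,
                          lam * ri + (1 - lam) * rj,
                          lam * spi + (1 - lam) * spj))
        return batch
```

Because neighbors are found purely by state-action proximity, interpolation can pair transitions from different episodes, which is what enables the inter-episode recombination described above. A real implementation would use a faster neighbor index (e.g. a KD-tree) rather than the brute-force search shown here.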
Code Release
Code release coming soon!
Trained Agents
Videos coming soon.
Paper
Please find our 2021 NeurIPS Deep RL Workshop paper, along with our supplementary technical report, linked above.
If you find NMER useful, please consider citing our paper as:
@inproceedings{sander2021neighborhood,
title={Neighborhood Mixup Experience Replay: Local Convex Interpolation for Improved Sample Efficiency in Continuous Control Tasks},
author={Ryan Sander and Wilko Schwarting and Tim Seyde and Igor Gilitschenski and Sertac Karaman and Daniela Rus},
booktitle={Deep RL Workshop NeurIPS 2021},
year={2021},
url={https://openreview.net/forum?id=jp9NJIlTK-t}
}
Acknowledgements
This research was supported by the Toyota Research Institute (TRI). This article solely reflects the opinions and conclusions of its authors and not TRI, Toyota, or any other entity. We thank TRI for their support. The authors thank the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC and consultation resources that have contributed to the research results reported within this publication.