Measuring Sample Efficiency and Generalization in Reinforcement Learning Benchmarks: NeurIPS 2020 Procgen Benchmark

Sharada Mohanty, Jyotish Poonganam, Adrien Gaidon, Andrey Kolobov, Blake Wulfe, Dipam Chakraborty, Gražvydas Šemetulskis, João Schapke, Jonas Kubilius, Jurgis Pašukonis, Linas Klimas, Matthew Hausknecht, Patrick MacAlpine, Quang Nhat Tran, Thomas Tumiel, Xiaocheng Tang, Xinwei Chen, Christopher Hesse, Jacob Hilton, William Hebgen Guss, Sahika Genc, John Schulman, Karl Cobbe
Proceedings of the NeurIPS 2020 Competition and Demonstration Track, PMLR 133:361-395, 2021.

Abstract

The NeurIPS 2020 Procgen Competition was designed as a centralized benchmark with clearly defined tasks for measuring sample efficiency and generalization in reinforcement learning. Generalization remains one of the most fundamental challenges in deep reinforcement learning, yet few benchmarks exist to measure the community's progress on it. We present the design of a centralized benchmark for reinforcement learning that measures sample efficiency and generalization by performing end-to-end evaluation of the training and rollout phases of thousands of user-submitted code bases in a scalable way. We built the benchmark on top of the existing Procgen Benchmark by defining clear tasks and standardizing the end-to-end evaluation setup. The design aims to maximize the flexibility available to researchers who wish to design future iterations of such benchmarks, while imposing the practical constraints necessary for a system like this to scale. This paper presents the competition setup, along with details and analysis of the top solutions identified through this setup in the context of the 2020 iteration of the competition at NeurIPS.
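For concreteness, the competition's generalization task builds on Procgen's procedurally generated levels: agents train on a finite set of level seeds and are then rolled out on levels drawn from the full distribution. Below is a minimal sketch of such a train/test split using the publicly released procgen package and the classic gym API; the level counts are illustrative, not the competition's official settings.

    import gym

    # Training environment: a fixed, finite set of procedurally
    # generated levels (200 here, purely for illustration).
    train_env = gym.make(
        "procgen:procgen-coinrun-v0",
        num_levels=200,
        start_level=0,
        distribution_mode="easy",
    )

    # Evaluation environment: num_levels=0 samples from the full
    # level distribution, so rollouts include levels never seen
    # during training; the difference between training and rollout
    # scores is the generalization gap.
    test_env = gym.make(
        "procgen:procgen-coinrun-v0",
        num_levels=0,
        start_level=0,
        distribution_mode="easy",
    )

    obs = train_env.reset()
    obs, reward, done, info = train_env.step(train_env.action_space.sample())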

Cite this Paper


BibTeX
@InProceedings{pmlr-v133-mohanty21a,
  title     = {Measuring Sample Efficiency and Generalization in Reinforcement Learning Benchmarks: NeurIPS 2020 Procgen Benchmark},
  author    = {Mohanty, Sharada and Poonganam, Jyotish and Gaidon, Adrien and Kolobov, Andrey and Wulfe, Blake and Chakraborty, Dipam and \v{S}emetulskis, Gra\v{z}vydas and Schapke, Jo\~{a}o and Kubilius, Jonas and Pa\v{s}ukonis, Jurgis and Klimas, Linas and Hausknecht, Matthew and MacAlpine, Patrick and Tran, Quang Nhat and Tumiel, Thomas and Tang, Xiaocheng and Chen, Xinwei and Hesse, Christopher and Hilton, Jacob and Guss, William Hebgen and Genc, Sahika and Schulman, John and Cobbe, Karl},
  booktitle = {Proceedings of the NeurIPS 2020 Competition and Demonstration Track},
  pages     = {361--395},
  year      = {2021},
  editor    = {Escalante, Hugo Jair and Hofmann, Katja},
  volume    = {133},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--12 Dec},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v133/mohanty21a/mohanty21a.pdf},
  url       = {https://proceedings.mlr.press/v133/mohanty21a.html},
  abstract  = {The NeurIPS 2020 Procgen Competition was designed as a centralized benchmark with clearly defined tasks for measuring sample efficiency and generalization in reinforcement learning. Generalization remains one of the most fundamental challenges in deep reinforcement learning, yet few benchmarks exist to measure the community's progress on it. We present the design of a centralized benchmark for reinforcement learning that measures sample efficiency and generalization by performing end-to-end evaluation of the training and rollout phases of thousands of user-submitted code bases in a scalable way. We built the benchmark on top of the existing Procgen Benchmark by defining clear tasks and standardizing the end-to-end evaluation setup. The design aims to maximize the flexibility available to researchers who wish to design future iterations of such benchmarks, while imposing the practical constraints necessary for a system like this to scale. This paper presents the competition setup, along with details and analysis of the top solutions identified through this setup in the context of the 2020 iteration of the competition at NeurIPS.}
}
Endnote
%0 Conference Paper
%T Measuring Sample Efficiency and Generalization in Reinforcement Learning Benchmarks: NeurIPS 2020 Procgen Benchmark
%A Sharada Mohanty
%A Jyotish Poonganam
%A Adrien Gaidon
%A Andrey Kolobov
%A Blake Wulfe
%A Dipam Chakraborty
%A Gražvydas Šemetulskis
%A João Schapke
%A Jonas Kubilius
%A Jurgis Pašukonis
%A Linas Klimas
%A Matthew Hausknecht
%A Patrick MacAlpine
%A Quang Nhat Tran
%A Thomas Tumiel
%A Xiaocheng Tang
%A Xinwei Chen
%A Christopher Hesse
%A Jacob Hilton
%A William Hebgen Guss
%A Sahika Genc
%A John Schulman
%A Karl Cobbe
%B Proceedings of the NeurIPS 2020 Competition and Demonstration Track
%C Proceedings of Machine Learning Research
%D 2021
%E Hugo Jair Escalante
%E Katja Hofmann
%F pmlr-v133-mohanty21a
%I PMLR
%P 361--395
%U https://proceedings.mlr.press/v133/mohanty21a.html
%V 133
%X The NeurIPS 2020 Procgen Competition was designed as a centralized benchmark with clearly defined tasks for measuring sample efficiency and generalization in reinforcement learning. Generalization remains one of the most fundamental challenges in deep reinforcement learning, yet few benchmarks exist to measure the community's progress on it. We present the design of a centralized benchmark for reinforcement learning that measures sample efficiency and generalization by performing end-to-end evaluation of the training and rollout phases of thousands of user-submitted code bases in a scalable way. We built the benchmark on top of the existing Procgen Benchmark by defining clear tasks and standardizing the end-to-end evaluation setup. The design aims to maximize the flexibility available to researchers who wish to design future iterations of such benchmarks, while imposing the practical constraints necessary for a system like this to scale. This paper presents the competition setup, along with details and analysis of the top solutions identified through this setup in the context of the 2020 iteration of the competition at NeurIPS.
APA
Mohanty, S., Poonganam, J., Gaidon, A., Kolobov, A., Wulfe, B., Chakraborty, D., Šemetulskis, G., Schapke, J., Kubilius, J., Pašukonis, J., Klimas, L., Hausknecht, M., MacAlpine, P., Tran, Q.N., Tumiel, T., Tang, X., Chen, X., Hesse, C., Hilton, J., Guss, W.H., Genc, S., Schulman, J. & Cobbe, K. (2021). Measuring Sample Efficiency and Generalization in Reinforcement Learning Benchmarks: NeurIPS 2020 Procgen Benchmark. Proceedings of the NeurIPS 2020 Competition and Demonstration Track, in Proceedings of Machine Learning Research 133:361-395. Available from https://proceedings.mlr.press/v133/mohanty21a.html.