bmazoure / ppo_jax
A JAX implementation of Proximal Policy Optimization (PPO) tuned specifically for Procgen, with benchmarked results and saved model weights for all environments.
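For readers unfamiliar with PPO, the core of the algorithm is the clipped surrogate objective. The following is a minimal, generic sketch in JAX — it is an illustration of the standard technique, not code from this repository, and the function name and default clip value are assumptions.

```python
import jax.numpy as jnp


def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Standard PPO clipped surrogate loss (illustrative sketch).

    log_probs_new: log pi_new(a|s) under the current policy
    log_probs_old: log pi_old(a|s) under the behavior policy
    advantages:    estimated advantages A_t (e.g. from GAE)
    clip_eps:      clipping radius epsilon (0.2 is a common default)
    """
    # Probability ratio r_t = pi_new(a|s) / pi_old(a|s)
    ratio = jnp.exp(log_probs_new - log_probs_old)
    # Unclipped and clipped surrogate terms
    unclipped = ratio * advantages
    clipped = jnp.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Take the pessimistic (minimum) term; negate because we minimize
    return -jnp.mean(jnp.minimum(unclipped, clipped))
```

With identical old and new log-probabilities the ratio is 1, so the loss reduces to the negative mean advantage; the clipping only bites once the policy has moved more than `clip_eps` away from the behavior policy.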
59 stars · Updated 3 years ago
Alternatives and similar repositories for ppo_jax
Users interested in ppo_jax are comparing it to the libraries listed below.
- Baselines for gymnax · 73 stars · Updated 2 years ago
- JAX implementations of core deep RL algorithms · 82 stars · Updated 3 years ago
- General modules for JAX · 71 stars · Updated 3 months ago
- A collection of RL algorithms written in JAX · 104 stars · Updated 3 years ago
- CleanRL's implementation of DeepMind's Podracer Sebulba architecture for distributed deep RL · 120 stars · Updated last year
- Vectorization techniques for fast population-based training · 56 stars · Updated 3 years ago
- JAX implementations of deep RL agents with resets, from the paper "The Primacy Bias in Deep Reinforcement Learning" · 103 stars · Updated 3 years ago
- An implementation of MuZero in JAX · 58 stars · Updated 3 years ago
- Accelerated replay buffers in JAX · 45 stars · Updated 3 years ago
- 89 stars · Updated 3 months ago
- Implementations of robust Dual Curriculum Design (DCD) algorithms for unsupervised environment design · 138 stars · Updated last year
- Jax-Baseline is a reinforcement learning implementation using JAX and the Flax/Haiku libraries, mirroring the functionality of Stable-Baselin… · 62 stars · Updated 3 weeks ago
- This code implements Prioritized Level Replay, a method for sampling training levels for reinforcement learning agents that exploits the … · 92 stars · Updated 4 years ago
- 88 stars · Updated last year
- Docker containers of baseline agents for the Crafter environment