PyTorch implementation of Stochastic MuZero for gym environments. The algorithm supports a wide range of action and observation spaces, both discrete and continuous.
☆77 · Dec 31, 2025 · Updated 2 months ago
Alternatives and similar repositories for Stochastic-muzero
Users who are interested in Stochastic-muzero are comparing it to the libraries listed below.
- Pytorch Implementation of MuZero Unplugged for gym environment. This algorithm is capable of supporting a wide range of action and observ… ☆35 · Jun 25, 2025 · Updated 8 months ago
- A number of agents (PPO, MuZero) with a Perceiver-based NN architecture that can be trained to achieve goals in nethack/minihack environm… ☆43 · Sep 19, 2022 · Updated 3 years ago
- A C++ pytorch implementation of MuZero ☆40 · May 1, 2024 · Updated last year
- A project that provides help for using DeepMind's mctx on gym-style environments. ☆65 · Nov 14, 2024 · Updated last year
- Pytorch Implementation of MuZero for gym environment. It supports any Discrete, Box and Box2D configuration for the action space and obse… ☆19 · Jan 24, 2023 · Updated 3 years ago
- Pytorch Implementation of MuZero ☆352 · Jul 23, 2023 · Updated 2 years ago
- [IEEE ToG] MiniZero: An AlphaZero and MuZero Training Framework ☆125 · Feb 25, 2026 · Updated 2 weeks ago
- [ICML 2024, Spotlight] EfficientZero V2: Mastering Discrete and Continuous Control with Limited Data ☆106 · Aug 9, 2024 · Updated last year
- Using a modified version of Werner Duvaud's MuZero implementation (https://github.com/werner-duvaud/muzero-general) this reinforcement ag… ☆18 · Jun 30, 2021 · Updated 4 years ago
- A clean implementation of MuZero and AlphaZero following the AlphaZero General framework. Train and Pit both algorithms against each othe… ☆168 · Mar 28, 2021 · Updated 4 years ago
- ♟️ Vectorized RL game environments in JAX ☆592 · Mar 6, 2025 · Updated last year
- Classic MCTS example with mctx ☆24 · May 25, 2023 · Updated 2 years ago
- Implementation of some of the Deep Distributional Reinforcement Learning Algorithms. ☆25 · Jun 17, 2025 · Updated 9 months ago
- Open-source codebase for EfficientZero, from "Mastering Atari Games with Limited Data" at NeurIPS 2021. ☆926 · Dec 20, 2023 · Updated 2 years ago
- Advantage Alignment Algorithms (ICLR 2025 oral) ☆17 · Apr 7, 2025 · Updated 11 months ago
- Swarm learning algorithm ☆11 · Jun 2, 2021 · Updated 4 years ago
- MuZero ☆2,785 · Sep 3, 2024 · Updated last year
- Codes for "Efficient Offline Policy Optimization with a Learned Model", ICLR 2023 ☆30 · Jul 18, 2023 · Updated 2 years ago
- Deep memory and sequence models in JAX ☆23 · Jan 15, 2026 · Updated 2 months ago
- ☆54 · Apr 11, 2023 · Updated 2 years ago
- ☆12 · Apr 22, 2022 · Updated 3 years ago
- Implementation of MuZero with PyTorch, based on the pseudocode from DeepMind (https://arxiv.org/src/1911.08265v2/anc/pseudocode.py). ☆33 · Aug 14, 2022 · Updated 3 years ago
- A high-throughput, end-to-end RL library for infinite-horizon tasks. ☆23 · Oct 22, 2025 · Updated 4 months ago
- Adding Dreamer-v3's implementation tricks to CleanRL's PPO ☆14 · May 19, 2023 · Updated 2 years ago
- ☆14 · Aug 18, 2023 · Updated 2 years ago
- Code for our TMLR paper "Distributional GFlowNets with Quantile Flows". ☆13 · Feb 14, 2024 · Updated 2 years ago
- A PyTorch implementation of DeepMind's MuZero agent ☆37 · Dec 1, 2023 · Updated 2 years ago
- Deep Reinforcement Learning Framework done with PyTorch ☆43 · Mar 12, 2025 · Updated last year
- A PyTorch implementation of SEED, originally created by Google Research for TensorFlow 2. ☆15 · Dec 8, 2020 · Updated 5 years ago
- 🏛️ A research-friendly codebase for fast experimentation of single-agent reinforcement learning in JAX • End-to-End JAX RL ☆397 · Mar 1, 2026 · Updated 2 weeks ago
- AlphaZero for continuous control tasks ☆23 · Dec 7, 2022 · Updated 3 years ago
- PyTorch Implementation of the Maximum a Posteriori Policy Optimisation ☆84 · Nov 19, 2022 · Updated 3 years ago
- ☆22 · Jan 15, 2026 · Updated 2 months ago
- A method to train DRL models with TensorFlow and BizHawk. ☆25 · Nov 12, 2019 · Updated 6 years ago
- ☆92 · Feb 16, 2026 · Updated last month
- ☆20 · May 22, 2022 · Updated 3 years ago
- Code for Model-Free Opponent Shaping (ICML 2022) ☆20 · Nov 18, 2022 · Updated 3 years ago
- ☆22 · Aug 10, 2022 · Updated 3 years ago
- Drop-in environment replacements that make your RL algorithm train faster. ☆21 · Jun 19, 2024 · Updated last year