hijkzzz / noisy-mappo
Multi-agent PPO with noise (97% win rates on Hard scenarios of SMAC)
☆76 · Updated Jun 9, 2023
Alternatives and similar repositories for noisy-mappo
Users interested in noisy-mappo are comparing it to the libraries listed below.
- Implementations of MAPPO and IPPO on SMAC, the multi-agent StarCraft environment. ☆77 · Updated Mar 25, 2022
- We extend pymarl2 to pymarl3, equipping the MARL algorithms with permutation invariance and permutation equivariance properties. The enh… ☆173 · Updated Jan 7, 2024
- This is the official implementation of Multi-Agent PPO (MAPPO). ☆1,884 · Updated Jul 18, 2024
- Codes accompanying the paper "ROMA: Multi-Agent Reinforcement Learning with Emergent Roles" (ICML 2020, https://arxiv.org/abs/2003.08039). ☆168 · Updated Dec 8, 2022
- Codebase for [Order Matters: Agent-by-agent Policy Optimization](https://openreview.net/forum?id=Q-neeWNVv1). ☆32 · Updated Nov 22, 2025
- ☆11 · Updated Apr 23, 2021
- Fine-tuned MARL algorithms on SMAC (100% win rates on most scenarios). ☆708 · Updated May 18, 2024
- (AAAI24 oral) Implementation of RPPO (Risk-sensitive PPO) and RPBT (population-based self-play with RPPO). ☆12 · Updated May 22, 2023
- ☆222 · Updated Jun 4, 2023
- Concise PyTorch implementations of MARL algorithms, including MAPPO, MADDPG, MATD3, QMIX, and VDN. ☆714 · Updated Oct 13, 2022
- ☆12 · Updated Aug 15, 2020
- A deep reinforcement learning multi-agent algorithm in which a team of agents learns to complete a task and communicate with one another.