facebookresearch / how-to-autorl
Plug-and-play Hydra sweepers for the EA-based multifidelity method DEHB and several population-based training variations, all proven to efficiently tune RL hyperparameters.
☆85 · Nov 27, 2023 · Updated 2 years ago
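Because the sweepers plug into Hydra, tuning is driven from the launch command (a multirun) rather than from changes to the training code. The sketch below shows that general pattern under stated assumptions: the script name, the `configs/train` config path, and the `hydra/sweeper=DEHB` override are illustrative, not verbatim from the how-to-autorl documentation.

```python
# Minimal sketch of a Hydra training entry point that a plug-and-play sweeper
# could drive. Assumes hydra-core and omegaconf are installed; the config path,
# config name, and sweeper name "DEHB" are hypothetical placeholders.
import hydra
from omegaconf import DictConfig


@hydra.main(config_path="configs", config_name="train", version_base=None)
def train(cfg: DictConfig) -> float:
    """Train an RL agent with the hyperparameters the sweeper proposes."""
    # ... build the agent from cfg (learning rate, discount, etc.) and train ...
    eval_return = 0.0  # placeholder: return the metric the sweeper optimizes
    return eval_return


if __name__ == "__main__":
    # Launched as a Hydra multirun so the sweeper, not the user, picks the
    # hyperparameter configurations, e.g.:
    #   python train.py --multirun hydra/sweeper=DEHB
    train()
```

Hydra sweepers use the task function's return value as the optimization objective, which is why the sketch returns the evaluation metric instead of just logging it.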
Alternatives and similar repositories for how-to-autorl
Users interested in how-to-autorl are comparing it to the libraries listed below.
- Directed masked autoencoders · ☆14 · Feb 5, 2026 · Updated last week
- Modular Single-file Reinforcement Learning Algorithms Library · ☆38 · May 16, 2023 · Updated 2 years ago
- Temporally Correlated Episodic Reinforcement Learning, ICLR 24 · ☆12 · Apr 8, 2024 · Updated last year
- Causal Analysis of Agent Behavior for AI Safety · ☆19 · Jun 27, 2023 · Updated 2 years ago
- ☆91 · Jan 27, 2026 · Updated 3 weeks ago
- ☆251 · Nov 19, 2024 · Updated last year
- Official code release for "CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity" · ☆87 · Jun 4, 2024 · Updated last year
- Tutorials on how to use EAGERx · ☆16 · Aug 14, 2025 · Updated 6 months ago
- VC-FB and MC-FB algorithms from "Zero-Shot Reinforcement Learning from Low Quality Data" (NeurIPS 2024) · ☆22 · Jan 14, 2025 · Updated last year
- ☆18 · Jul 24, 2023 · Updated 2 years ago
- Reinforcement Learning inside a 3D soccer simulation · ☆37 · Sep 15, 2024 · Updated last year
- Parallel Q-Learning: Scaling Off-policy Reinforcement Learning under Massively Parallel Simulation · ☆76 · Aug 2, 2023 · Updated 2 years ago
- A benchmark library for Dynamic Algorithm Configuration. · ☆33 · Updated this week
- Simple single-file baselines for Q-Learning in a pure-GPU setting · ☆234 · Nov 24, 2025 · Updated 2 months ago
- Code to accompany the paper "The Information Geometry of Unsupervised Reinforcement Learning" · ☆20 · Oct 6, 2021 · Updated 4 years ago
- ☆17 · Sep 28, 2023 · Updated 2 years ago
- Bridging State and History Representations: Understanding Self-Predictive RL, ICLR 2024 · ☆24 · Apr 7, 2024 · Updated last year
- ☆19 · Mar 1, 2023 · Updated 2 years ago
- Official PyTorch implementation for our ICLR 2023 paper "Latent State Marginalization as a Low-cost Approach for Improving Exploration". · ☆24 · Feb 9, 2023 · Updated 3 years ago
- ☆19 · Jul 24, 2023 · Updated 2 years ago
- Drop-in environment replacements that make your RL algorithm train faster. · ☆21 · Jun 19, 2024 · Updated last year
- A framework for Reinforcement Learning research. · ☆246 · Feb 9, 2026 · Updated last week
- This repository is the official implementation of the TRAC optimizer in Fast TRAC: A Parameter-Free Optimizer for Lifelong Reinforcement … · ☆32 · May 2, 2025 · Updated 9 months ago
- ☆91 · Jan 21, 2026 · Updated 3 weeks ago
- [NeurIPS 2022] Open source code for reusing prior computational work in RL. · ☆100 · Jul 5, 2023 · Updated 2 years ago
- The code for the paper "A Bayesian Approach to Online Planning" published in ICML 2024. · ☆13 · Jun 17, 2024 · Updated last year
- RL Environments in JAX 🌍 · ☆857 · May 30, 2025 · Updated 8 months ago
- Code associated with the paper "Sparse Bayesian Optimization" · ☆26 · Oct 31, 2023 · Updated 2 years ago
- Code for the paper "Batch size invariance for policy optimization" · ☆56 · Apr 2, 2023 · Updated 2 years ago
- Really Fast End-to-End Jax RL Implementations · ☆1,022 · Sep 9, 2024 · Updated last year
- Image-based gridworld experiment for learning Markov state abstractions · ☆21 · Sep 16, 2024 · Updated last year
- Challenging Memory-based Deep Reinforcement Learning Agents · ☆109 · Oct 27, 2024 · Updated last year
- Library for the Test-based Calibration Error (TCE) metric to quantify the degree of classifier calibration. · ☆13 · Sep 15, 2023 · Updated 2 years ago
- Submission Under Review · ☆17 · May 15, 2025 · Updated 9 months ago
- ☆11 · Oct 19, 2023 · Updated 2 years ago
- An efficient solver for nonlinear constrained feedback Stackelberg games · ☆11 · Feb 25, 2025 · Updated 11 months ago
- Official implementation for "How Should We Meta-Learn Reinforcement Learning Algorithms?" · ☆23 · Sep 7, 2025 · Updated 5 months ago
- ☆11 · Oct 20, 2023 · Updated 2 years ago
- [AutoML'22] Bayesian Generational Population-based Training (BG-PBT) · ☆29 · Sep 16, 2022 · Updated 3 years ago