CLAIRE-Labo / no-representation-no-trust
Codebase to fully reproduce the results of "No Representation, No Trust: Connecting Representation, Collapse, and Trust Issues in PPO" (Moalla et al. 2024). Uses TorchRL and provides extensive tools for studying representation dynamics in policy optimization.
☆27 · Updated 8 months ago
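To give a concrete sense of what "studying representation dynamics" means here: a standard diagnostic in this line of work is the effective rank ("srank") of the feature matrix produced by the policy network, which shrinks when the representation collapses. The sketch below is a minimal, self-contained illustration in plain PyTorch; the trunk network, batch, and delta threshold are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch (illustrative, not this repository's actual API): track the
# effective rank ("srank") of policy features, a common representation-collapse
# diagnostic. The trunk network and delta threshold below are assumptions.
import torch
import torch.nn as nn


def effective_rank(features: torch.Tensor, delta: float = 0.01) -> int:
    """Smallest k such that the top-k singular values of the (batch, dim)
    feature matrix capture a (1 - delta) fraction of the spectral mass."""
    singular_values = torch.linalg.svdvals(features)
    cumulative = torch.cumsum(singular_values, dim=0) / singular_values.sum()
    return int(torch.searchsorted(cumulative, 1.0 - delta).item()) + 1


# Hypothetical feature extractor standing in for a PPO policy/value trunk.
trunk = nn.Sequential(nn.Linear(8, 256), nn.Tanh(), nn.Linear(256, 256), nn.Tanh())
obs_batch = torch.randn(512, 8)  # a batch of observations

with torch.no_grad():
    feats = trunk(obs_batch)  # (512, 256) feature matrix

# Logged periodically during training, a sharp drop signals collapse.
print("effective rank:", effective_rank(feats))
```

In a study like the one above, such a metric would typically be logged throughout PPO training rather than computed once.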
Alternatives and similar repositories for no-representation-no-trust
Users interested in no-representation-no-trust are comparing it to the libraries listed below.
- Benchmarking RL for POMDPs in Pure JAX [Code for "Structured State Space Models for In-Context Reinforcement Learning" (NeurIPS 2023)] ☆110 · Updated last year
- Code for "Unsupervised Zero-Shot RL via Functional Reward Representations" ☆57 · Updated last year
- Unified Implementations of Offline Reinforcement Learning Algorithms ☆88 · Updated 3 months ago
- ☆31 · Updated 4 years ago
- Learning diverse options through the Laplacian representation ☆23 · Updated last year
- Code for "SimbaV2: Hyperspherical Normalization for Scalable Deep Reinforcement Learning" ☆59 · Updated 2 months ago
- CleanRL's implementation of DeepMind's Podracer Sebulba Architecture for Distributed DRL ☆114 · Updated 11 months ago
- Extreme Q-Learning: Max Entropy RL without Entropy ☆87 · Updated 2 years ago
- ☆82 · Updated 4 months ago
- Building blocks for productive research ☆59 · Updated last week
- Foundation Policies with Hilbert Representations (ICML 2024) ☆90 · Updated last year
- Code for the paper "Inference via Interpolation: Contrastive Representations Provably Enable Planning and Inference" ☆43 · Updated last year
- Contains JAX implementations of algorithms for inverse reinforcement learning ☆73 · Updated 11 months ago
- Simple single-file baselines for Q-Learning in a pure-GPU setting ☆176 · Updated 4 months ago
- Clean single-file implementations of offline RL algorithms in JAX ☆150 · Updated 7 months ago
- Open source code for the paper "Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning" (ICML 2023) ☆46 · Updated 2 months ago
- General Modules for JAX ☆67 · Updated 4 months ago
- Recall to Imagine, a model-based RL algorithm with superhuman memory. Oral (1.2%) @ ICLR 2024 ☆70 · Updated last year
- MR.Q is a general-purpose model-free reinforcement learning algorithm ☆107 · Updated last month
- Plug-and-play hydra sweepers for the EA-based multifidelity method DEHB and several population-based training variations, all proven to e… ☆83 · Updated last year
- Learning to Modulate pre-trained Models in RL (Decision Transformer, LoRA, Fine-tuning) ☆59 · Updated 10 months ago
- Challenging Memory-based Deep Reinforcement Learning Agents ☆102 · Updated 9 months ago
- PyTorch Package for Quasimetric Learning ☆42 · Updated 9 months ago
- Code release for "Efficient Planning in a Compact Latent Action Space" (ICLR 2023): https://arxiv.org/abs/2208.10291 ☆109 · Updated 2 years ago
- ☆103 · Updated 5 months ago
- Official codebase for "The Generalization Gap in Offline Reinforcement Learning", accepted to ICLR 2024 ☆28 · Updated last year
- Corax: Core RL in JAX ☆38 · Updated last year
- Action Value Gradient Algorithm ☆22 · Updated 2 months ago
- Code and data for the paper "Bridging RL Theory and Practice with the Effective Horizon" ☆48 · Updated last year
- MTM: Masked Trajectory Models for Prediction, Representation, and Control ☆157 · Updated 2 years ago