NousResearch / atropos
Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse environments.
☆760 · Updated this week
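The core pattern behind atropos and most of the environment libraries listed below is an environment that produces prompts, gathers model completions, and attaches verifiable rewards so the resulting trajectories can feed RL training. As a rough, hypothetical sketch of that pattern only (this is not the atropos API; `Trajectory` and `ArithmeticEnv` are illustrative names), a minimal verifiable-reward environment might look like:

```python
from dataclasses import dataclass
from typing import List

# Illustrative only -- a generic LLM-RL environment shape, not atropos's API.

@dataclass
class Trajectory:
    prompt: str
    completion: str
    reward: float

class ArithmeticEnv:
    """Toy environment: score exact-match arithmetic answers."""

    def get_prompt(self) -> str:
        return "What is 17 + 25? Answer with the number only."

    def score(self, completion: str) -> float:
        # Verifiable reward: 1.0 for the correct answer, 0.0 otherwise.
        return 1.0 if completion.strip() == "42" else 0.0

    def collect(self, completions: List[str]) -> List[Trajectory]:
        prompt = self.get_prompt()
        return [Trajectory(prompt, c, self.score(c)) for c in completions]

if __name__ == "__main__":
    env = ArithmeticEnv()
    # In practice these completions would come from LLM rollouts.
    for t in env.collect(["42", "The answer is 42.", "43"]):
        print(t.reward, repr(t.completion))
```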
Alternatives and similar repositories for atropos
Users interested in atropos are comparing it to the libraries listed below.
- Async RL Training at Scale ☆867 · Updated this week
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,242 · Updated 3 weeks ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆452 · Updated last year
- Training-Ready RL Environments + Evals ☆182 · Updated last week
- rl from zero pretrain, can it be done? yes. ☆281 · Updated 2 months ago
- An interface library for RL post-training with environments. ☆789 · Updated this week
- Build your own visual reasoning model ☆415 · Updated 2 weeks ago
- System 2 Reasoning Link Collection ☆861 · Updated 8 months ago
- Testing baseline LLM performance across various models ☆322 · Updated 3 weeks ago
- Frontier models playing the board game Diplomacy. ☆604 · Updated 2 weeks ago
- Inference-time scaling for LLMs-as-a-judge. ☆314 · Updated last month
- ☆234 · Updated 5 months ago
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆576 · Updated 3 months ago
- ☆136 · Updated 8 months ago
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆589 · Updated 4 months ago
- Exploring Applications of GRPO ☆249 · Updated 3 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆850 · Updated last month
- ☆107 · Updated this week
- ☆128 · Updated 11 months ago
- Aidan Bench attempts to measure <big_model_smell> in LLMs. ☆315 · Updated 5 months ago
- smol models are fun too ☆92 · Updated last year
- Distributed Training Over-The-Internet ☆966 · Updated last month
- ⚖️ Awesome LLM Judges ⚖️ ☆134 · Updated 7 months ago
- Recipes to scale inference-time compute of open models ☆1,118 · Updated 6 months ago
- Open source interpretability platform 🧠 ☆509 · Updated last week
- Open source interpretability artefacts for R1. ☆164 · Updated 7 months ago
- Automatic evals for LLMs ☆559 · Updated 5 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers. ☆329 · Updated last year
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆561 · Updated last month
- Super basic implementation (gist-like) of RLMs with REPL environments. ☆273 · Updated last month