NousResearch / atropos
Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse environments.
☆726 · Updated this week
Alternatives and similar repositories for atropos
Users interested in atropos are comparing it to the libraries listed below.
- Async RL Training at Scale ☆722 · Updated this week
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,194 · Updated 2 weeks ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆450 · Updated last year
- rl from zero pretrain, can it be done? yes. ☆277 · Updated 3 weeks ago
- Training-Ready RL Environments + Evals ☆132 · Updated this week
- Post-training with Tinker ☆1,096 · Updated this week
- An interface library for RL post-training with environments. ☆66 · Updated this week
- ☆229 · Updated 4 months ago
- Inference-time scaling for LLMs-as-a-judge. ☆303 · Updated 3 weeks ago
- Testing baseline LLM performance across various models ☆319 · Updated 2 weeks ago
- ☆135 · Updated 7 months ago
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆560 · Updated 2 months ago
- Aidan Bench attempts to measure <big_model_smell> in LLMs. ☆312 · Updated 4 months ago
- Exploring Applications of GRPO ☆248 · Updated 2 months ago
- Frontier Models playing the board game Diplomacy. ☆592 · Updated last month
- System 2 Reasoning Link Collection ☆857 · Updated 7 months ago
- ⚖️ Awesome LLM Judges ⚖️ ☆132 · Updated 5 months ago
- Build your own visual reasoning model ☆413 · Updated 2 weeks ago
- Code for paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆553 · Updated 2 months ago
- An open infrastructure to democratize and decentralize the development of superintelligence for humanity. ☆500 · Updated this week
- Distributed Training Over-The-Internet ☆961 · Updated last week
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers. ☆325 · Updated last year
- Open-source interpretability artefacts for R1. ☆163 · Updated 6 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆836 · Updated last week
- prime is a framework for efficient, globally distributed training of AI models over the internet. ☆837 · Updated 5 months ago
- Automatic evals for LLMs ☆547 · Updated 3 months ago
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆406 · Updated 3 weeks ago
- ☆105 · Updated this week
- ☆124 · Updated 10 months ago
- Tzafon-WayPoint is a robust, scalable solution for managing large fleets of browser instances. WayPoint stands out with unmatched cold‑st… ☆74 · Updated 6 months ago