srush / awesome-o1
A bibliography and survey of the papers surrounding o1
☆1,216 · Updated last year
Alternatives and similar repositories for awesome-o1
Users interested in awesome-o1 are comparing it to the libraries listed below.
- Recipes to scale inference-time compute of open models ☆1,120 · Updated 7 months ago
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,283 · Updated last week
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,411 · Updated 4 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆897 · Updated 2 months ago
- ☆1,045 · Updated 5 months ago
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆569 · Updated 2 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆584 · Updated this week
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,177 · Updated 4 months ago
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,394 · Updated last week
- Open-source framework for the research and development of foundation models. ☆673 · Updated last week
- RewardBench: the first evaluation tool for reward models. ☆672 · Updated 6 months ago
- System 2 Reasoning Link Collection ☆863 · Updated 9 months ago
- ☆1,035 · Updated last year
- Automatic evals for LLMs ☆569 · Updated this week
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆934 · Updated 10 months ago
- Scalable toolkit for efficient model alignment ☆847 · Updated 2 months ago
- O1 Replication Journey ☆2,003 · Updated 11 months ago
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,056 · Updated 4 months ago
- Official Repo for Open-Reasoner-Zero ☆2,084 · Updated 6 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,212 · Updated last week
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆686 · Updated 11 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆751 · Updated last year
- A project to improve skills of large language models ☆715 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆2,381 · Updated 2 weeks ago
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆607 · Updated 4 months ago
- Large Reasoning Models ☆806 · Updated last year
- Scalable RL solution for advanced reasoning of language models ☆1,785 · Updated 9 months ago
- Representation Engineering: A Top-Down Approach to AI Transparency ☆926 · Updated last year
- ☆556 · Updated last year
- ☆969 · Updated 11 months ago