srush / awesome-o1
A bibliography and survey of the papers surrounding o1
☆1,207 · Updated 9 months ago
Alternatives and similar repositories for awesome-o1
Users interested in awesome-o1 are comparing it to the repositories listed below.
- Recipes to scale inference-time compute of open models ☆1,112 · Updated 3 months ago
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,249 · Updated 2 weeks ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) ☆880 · Updated last month
- Procedural reasoning datasets ☆1,060 · Updated last week
- ☆893 · Updated last month
- System 2 Reasoning Link Collection ☆852 · Updated 5 months ago
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,068 · Updated last month
- SkyRL: A Modular Full-stack RL Library for LLMs ☆738 · Updated last week
- ☆1,033 · Updated 8 months ago
- RewardBench: the first evaluation tool for reward models ☆628 · Updated 2 months ago
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆520 · Updated last month
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆438 · Updated this week
- Scalable toolkit for efficient model alignment ☆837 · Updated 3 weeks ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆913 · Updated 6 months ago
- ☆621 · Updated last month
- O1 Replication Journey ☆1,998 · Updated 7 months ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆660 · Updated 7 months ago
- Large Reasoning Models ☆805 · Updated 8 months ago
- Automatic evals for LLMs ☆519 · Updated 2 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,841 · Updated this week
- Code for Quiet-STaR ☆738 · Updated last year
- ☆955 · Updated 7 months ago
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,006 · Updated 3 weeks ago
- A project to improve skills of large language models ☆529 · Updated this week
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware ☆744 · Updated 11 months ago
- Minimalistic large language model 3D-parallelism training ☆2,150 · Updated last month
- ☆508 · Updated last year
- AllenAI's post-training codebase ☆3,124 · Updated this week
- Verifiers for LLM Reinforcement Learning ☆1,780 · Updated this week
- OLMoE: Open Mixture-of-Experts Language Models ☆845 · Updated 5 months ago