hughbzhang / o1_inference_scaling_laws
Replicating o1 inference-time scaling laws
☆90 · Updated last year
Alternatives and similar repositories for o1_inference_scaling_laws
Users interested in o1_inference_scaling_laws are comparing it to the repositories listed below.
- ☆75 · Updated last year
- ☆109 · Updated last year
- ☆125 · Updated 9 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆61 · Updated last year
- ☆100 · Updated last year
- A large-scale, high-quality math dataset for reinforcement learning in language models ☆68 · Updated 9 months ago
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- [ICML 2025] Predictive Data Selection: The Data That Predicts Is the Data That Teaches ☆57 · Updated 9 months ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- ☆200 · Updated 7 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆112 · Updated 4 months ago
- Exploration of automated dataset selection approaches at large scales ☆50 · Updated 9 months ago
- ☆78 · Updated 9 months ago
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆110 · Updated 9 months ago
- ☆41 · Updated 8 months ago
- ☆89 · Updated last year
- RL Scaling and Test-Time Scaling (ICML 2025) ☆112 · Updated 10 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆118 · Updated 7 months ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆86 · Updated last year
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆151 · Updated 10 months ago
- Code for "Critique Fine-Tuning: Learning to Critique Is More Effective than Learning to Imitate" [COLM 2025] ☆179 · Updated 5 months ago
- A repository for research on medium-sized language models ☆78 · Updated last year
- [ICLR 2025] Data and code for the paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆78 · Updated last year
- ☆106 · Updated 7 months ago
- ☆105 · Updated last year
- Official repository for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆143 · Updated last year
- Official repository for "SkyLadder: Better and Faster Pretraining via Context Window Scheduling" ☆40 · Updated last month
- Can Language Models Solve Olympiad Programming? ☆122 · Updated 10 months ago
- [EMNLP 2025 Industry] Repository for "Z1: Efficient Test-time Scaling with Code" ☆67 · Updated 8 months ago