bradhilton / o1-chain-of-thought
o1 Chain of Thought Examples
☆33 · Updated 7 months ago
Alternatives and similar repositories for o1-chain-of-thought
Users interested in o1-chain-of-thought are comparing it to the repositories listed below.
- ☆65 · Updated 2 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆57 · Updated 3 months ago
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆101 · Updated 3 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆95 · Updated last week
- Code for "Reasoning to Learn from Latent Thoughts" ☆94 · Updated last month
- Toy implementation of Strawberry ☆31 · Updated 7 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆54 · Updated last year
- A large-scale, high-quality math dataset for reinforcement learning in language models ☆52 · Updated 2 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆146 · Updated 3 weeks ago
- Replicating O1 inference-time scaling laws ☆85 · Updated 5 months ago
- ☆110 · Updated 3 months ago
- References for searching, selecting, and synthesizing high-quality, large-quantity data for post-training LLMs ☆55 · Updated 7 months ago
- Codebase for "Instruction Following without Instruction Tuning" ☆34 · Updated 7 months ago
- Critique-out-Loud Reward Models ☆64 · Updated 6 months ago
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆59 · Updated last month
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆48 · Updated 6 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆47 · Updated 4 months ago
- Official repository of "Are Your LLMs Capable of Stable Reasoning?" ☆25 · Updated last month
- ☆70 · Updated last week
- Long Context Extension and Generalization in LLMs ☆55 · Updated 7 months ago
- Revisiting Mid-training in the Era of RL Scaling ☆37 · Updated 3 weeks ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆69 · Updated last month
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated 10 months ago
- Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆90 · Updated 2 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains ☆82 · Updated last week
- ☆50 · Updated 3 months ago
- ☆25 · Updated 7 months ago
- ☆63 · Updated last week
- ☆47 · Updated 5 months ago