bradhilton / o1-chain-of-thought
o1 Chain of Thought Examples
☆31 · Updated 3 months ago
Alternatives and similar repositories for o1-chain-of-thought:
Users interested in o1-chain-of-thought are comparing it to the repositories listed below.
- Toy implementation of Strawberry ☆30 · Updated 3 months ago
- ☆83 · Updated last week
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated 11 months ago
- ☆90 · Updated this week
- ☆93 · Updated 6 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆51 · Updated 9 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆101 · Updated this week
- Flow of Reasoning: Training LLMs for Divergent Problem Solving with Minimal Examples ☆57 · Updated this week
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆66 · Updated 2 weeks ago
- Natural Language Reinforcement Learning ☆68 · Updated last month
- [NeurIPS 2024] The official implementation of the paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs ☆88 · Updated 3 months ago
- Critique-out-Loud Reward Models ☆47 · Updated 3 months ago
- ☆25 · Updated 8 months ago
- NeurIPS 2024 tutorial on LLM Inference ☆37 · Updated last month
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) ☆48 · Updated 5 months ago
- Replicating o1 inference-time scaling laws ☆70 · Updated last month
- 🌾 OAT: Online AlignmenT for LLMs ☆81 · Updated 3 weeks ago
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv:2401.01335) ☆29 · Updated 10 months ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆119 · Updated 5 months ago
- Reproducible, flexible LLM evaluations ☆118 · Updated last month
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆34 · Updated 5 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆129 · Updated 2 months ago
- ☆98 · Updated last month
- Code implementation of synthetic continued pretraining ☆79 · Updated 2 weeks ago
- ☆50 · Updated 2 months ago
- CodeUltraFeedback: aligning large language models to coding preferences ☆66 · Updated 6 months ago
- ☆79 · Updated 3 months ago
- A repository for research on medium-sized language models ☆76 · Updated 7 months ago
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆27 · Updated 7 months ago
- ☆126 · Updated last month