GAIR-NLP / O1-Journey
O1 Replication Journey: A Strategic Progress Report – Part I
☆1,861 · Updated this week
Alternatives and similar repositories for O1-Journey:
Users interested in O1-Journey are comparing it to the repositories listed below.
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,448 · Updated 3 weeks ago
- Large Reasoning Models ☆787 · Updated last month
- An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & RingAttention & RFT) ☆3,761 · Updated this week
- ☆1,137 · Updated last month
- ☆812 · Updated last week
- 📰 Must-read papers and blogs on LLM based Long Context Modeling 🔥 ☆1,166 · Updated this week
- An Open Large Reasoning Model for Real-World Solutions ☆1,378 · Updated last month
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆521 · Updated 2 weeks ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆800 · Updated 2 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆908 · Updated last month
- Scalable RL solution for advanced reasoning of language models ☆873 · Updated this week
- A bibliography and survey of the papers surrounding o1 ☆1,042 · Updated 2 months ago
- ☆432 · Updated 2 weeks ago
- This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicit… ☆732 · Updated 2 months ago
- ☆2,289 · Updated this week
- ☆996 · Updated last month
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,083 · Updated last year
- A library for advanced large language model reasoning ☆1,659 · Updated this week
- veRL: Volcano Engine Reinforcement Learning for LLM ☆690 · Updated this week
- Recipes to train reward model for RLHF. ☆1,084 · Updated last month
- Reasoning in Large Language Models: Papers and Resources, including Chain-of-Thought and OpenAI o1 🍓 ☆2,269 · Updated last month
- ☆902 · Updated 6 months ago
- Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas" ☆977 · Updated 3 months ago
- Recipes to scale inference-time compute of open models ☆932 · Updated this week
- Reference implementation for DPO (Direct Preference Optimization) ☆2,323 · Updated 5 months ago
- Official repository for "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient and high-quality s… ☆565 · Updated last week
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large … ☆966 · Updated last month
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆687 · Updated 3 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆785 · Updated 2 weeks ago
- Code for Quiet-STaR ☆698 · Updated 4 months ago