GuanghaoYe / Emergence-of-Thinking
☆52 · Updated 4 months ago
Alternatives and similar repositories for Emergence-of-Thinking
Users who are interested in Emergence-of-Thinking are comparing it to the repositories listed below.
- A Sober Look at Language Model Reasoning ☆74 · Updated last week
- This is the official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆65 · Updated 2 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆57 · Updated 4 months ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆59 · Updated 6 months ago
- A comprehensive collection of learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ☆47 · Updated 2 weeks ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆106 · Updated 5 months ago
- Resources for the Enigmata Project. ☆40 · Updated 2 weeks ago
- ☆33 · Updated 9 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆121 · Updated 9 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆99 · Updated last month
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆141 · Updated 4 months ago
- This is the implementation of LeCo ☆31 · Updated 5 months ago
- ☆68 · Updated 3 months ago
- Revisiting Mid-training in the Era of RL Scaling ☆62 · Updated 2 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆32 · Updated 9 months ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆108 · Updated 6 months ago
- Official implementation of the ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆62 · Updated 2 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆61 · Updated 6 months ago
- ☆71 · Updated 7 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆159 · Updated 3 weeks ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆54 · Updated 7 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆63 · Updated 6 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆124 · Updated 3 months ago
- Repo of the paper "Free Process Rewards without Process Labels" ☆153 · Updated 3 months ago
- ☆59 · Updated 9 months ago
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision" ☆54 · Updated 6 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆72 · Updated 3 months ago
- The code and data for the paper JiuZhang3.0 ☆47 · Updated last year
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated last year