YangLing0818 / SuperCorrect-llm
[ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction
☆72 · Updated 2 months ago
Alternatives and similar repositories for SuperCorrect-llm
Users interested in SuperCorrect-llm are comparing it to the libraries listed below.
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs". ☆124 · Updated 3 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆98 · Updated last month
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆75 · Updated 2 weeks ago
- ☆61 · Updated this week
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆54 · Updated 6 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings". ☆59 · Updated 4 months ago
- Repo of the paper "Free Process Rewards without Process Labels". ☆153 · Updated 3 months ago
- The official repository of the Omni-MATH benchmark. ☆84 · Updated 6 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate". ☆157 · Updated 2 weeks ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆108 · Updated 2 weeks ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆105 · Updated 5 months ago
- ☆107 · Updated 3 months ago
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. ☆60 · Updated last month
- A Sober Look at Language Model Reasoning ☆71 · Updated last week
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆60 · Updated 2 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆63 · Updated 6 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆94 · Updated 3 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains ☆141 · Updated last week
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆37 · Updated 3 months ago
- ☆112 · Updated 3 weeks ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆59 · Updated 6 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆220 · Updated last month
- Official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning". ☆65 · Updated 2 months ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆70 · Updated 6 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning. ☆191 · Updated this week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆72 · Updated 4 months ago
- [ACL'25] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆64 · Updated this week
- ☆46 · Updated 7 months ago
- FastCuRL: Curriculum Reinforcement Learning with Stage-wise Context Scaling for Efficient LLM Reasoning ☆52 · Updated 2 weeks ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆78 · Updated 5 months ago