YangLing0818 / SuperCorrect-llm
[ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction
☆76 · Updated 4 months ago
Alternatives and similar repositories for SuperCorrect-llm
Users interested in SuperCorrect-llm are comparing it to the repositories listed below.
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆159 · Updated last week
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆125 · Updated 4 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆109 · Updated 6 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆105 · Updated 2 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆169 · Updated 3 weeks ago
- ☆126 · Updated 2 months ago
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆48 · Updated 8 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains ☆156 · Updated last month
- ☆117 · Updated 4 months ago
- Official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆69 · Updated 3 months ago
- ☆80 · Updated 2 weeks ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆63 · Updated 7 months ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆61 · Updated 8 months ago
- ☆67 · Updated last month
- [ICML 2025] Predictive Data Selection: The Data That Predicts Is the Data That Teaches ☆53 · Updated 5 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆118 · Updated last month
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 2 months ago
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆63 · Updated 3 months ago
- ☆59 · Updated 11 months ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆110 · Updated 7 months ago
- Research code for preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆100 · Updated 3 weeks ago
- The official repository of the Omni-MATH benchmark ☆85 · Updated 7 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆38 · Updated 5 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆234 · Updated 2 months ago
- Repo of paper "Free Process Rewards without Process Labels" ☆160 · Updated 4 months ago
- Process Reward Models That Think ☆47 · Updated last month
- Official repository for paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆86 · Updated 5 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆58 · Updated 2 weeks ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆60 · Updated 5 months ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning"☆166Updated 2 months ago