SeanLeng1 / Reward-Calibration
☆17 · Updated 8 months ago
Alternatives and similar repositories for Reward-Calibration
Users interested in Reward-Calibration are comparing it to the libraries listed below.
- instruction-following benchmark for large reasoning models ☆40 · Updated 3 weeks ago
- General Reasoner: Advancing LLM Reasoning Across All Domains ☆166 · Updated 2 months ago
- ☆41 · Updated 4 months ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆72 · Updated 9 months ago
- ☆86 · Updated 7 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆171 · Updated last month
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆83 · Updated 4 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆41 · Updated 6 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆65 · Updated last month
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 3 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆86 · Updated 11 months ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆64 · Updated 10 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆81 · Updated 7 months ago
- [ACL 2025 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆71 · Updated 2 months ago
- ☆56 · Updated 2 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆167 · Updated last month
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆77 · Updated 5 months ago
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP 2024) ☆25 · Updated 9 months ago
- ☆59 · Updated last year
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆44 · Updated last month
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆90 · Updated last year
- The official repository of the Omni-MATH benchmark ☆87 · Updated 8 months ago
- ☆128 · Updated 2 weeks ago
- Source code for the paper "ARIA: Training Language Agents with Intention-Driven Reward Aggregation" ☆20 · Updated 3 weeks ago
- [NeurIPS 2024] Official code for *🎯 DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆113 · Updated 8 months ago
- [NeurIPS 2024 Main Track] Code for the paper "Instruction Tuning With Loss Over Instructions" ☆39 · Updated last year
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆110 · Updated 3 months ago
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆52 · Updated 3 months ago
- ☆22 · Updated last year
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆30 · Updated last month