xiwenc1 / DRA-GRPO
Official code for the paper: DRA-GRPO: Exploring Diversity-Aware Reward Adjustment for R1-Zero-Like Training of Large Language Models
☆19 · Updated last month
Alternatives and similar repositories for DRA-GRPO
Users interested in DRA-GRPO are also looking at the repositories listed below.
- A Sober Look at Language Model Reasoning ☆81 · Updated last month
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆35 · Updated 3 weeks ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆85 · Updated 11 months ago
- ☆49 · Updated last month
- Awesome-Efficient-Inference-for-LRMs is a collection of state-of-the-art, novel, exciting, token-efficient methods for Large Reasoning Mo… ☆79 · Updated 2 months ago
- ☆255 · Updated last month
- What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆35 · Updated last month
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆34 · Updated 8 months ago
- ☆118 · Updated 5 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆78 · Updated 5 months ago
- ☆156 · Updated 2 months ago
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆47 · Updated 9 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆82 · Updated 6 months ago
- [ICLR 2025 Workshop] "Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models" ☆34 · Updated last month
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025. ☆25 · Updated 5 months ago
- [NeurIPS 2024] The official implementation of the paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆127 · Updated 4 months ago
- ☆24 · Updated 3 months ago
- ☆323 · Updated 2 weeks ago
- One-shot Entropy Minimization ☆175 · Updated 2 months ago
- Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆85 · Updated last month
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning. ☆282 · Updated last month
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. ☆70 · Updated 3 weeks ago
- Accepted LLM Papers in NeurIPS 2024 ☆37 · Updated 10 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆77 · Updated last month
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆39 · Updated 4 months ago
- Code for the paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆29 · Updated last year
- Survey on Data-centric Large Language Models ☆84 · Updated last year
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆58 · Updated 8 months ago
- Official repository of "Learning what reinforcement learning can't" ☆57 · Updated this week
- Official repository for the paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆86 · Updated 5 months ago