xiwenc1 / DRA-GRPO
Official code for the paper: DRA-GRPO: Exploring Diversity-Aware Reward Adjustment for R1-Zero-Like Training of Large Language Models
☆20 · Updated 3 months ago
Alternatives and similar repositories for DRA-GRPO
Users interested in DRA-GRPO are comparing it to the repositories listed below.
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?"☆37Updated 2 months ago
- A Sober Look at Language Model Reasoning☆83Updated last week
- ☆167Updated 4 months ago
- [NeurIPS 2025] What Makes a Reward Model a Good Teacher? An Optimization Perspective☆35Updated last week
- Discriminative Constrained Optimization for Reinforcing Large Reasoning Models☆37Updated this week
- ☆269Updated 2 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization☆87Updated last year
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning☆34Updated 10 months ago
- A curated list of awesome LLM Inference-Time Self-Improvement (ITSI, pronounced "itsy") papers from our recent survey: A Survey on Large …☆95Updated 9 months ago
- One-shot Entropy Minimization☆185Updated 3 months ago
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$☆48Updated 11 months ago
- [ICML 2025] "From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium"☆25Updated 2 months ago
- ☆332Updated last month
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples☆44Updated 2 months ago
- [ICML 2025] "From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information?"☆39Updated 2 months ago
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning".☆81Updated 3 months ago
- ☆125Updated 6 months ago
- Accepted LLM Papers in NeurIPS 2024☆37Updated 11 months ago
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs.☆127Updated 6 months ago
- Survey on Data-centric Large Language Models☆84Updated last year
- This is the official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning"☆69Updated 5 months ago
- ☆50Updated 2 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models☆133Updated 5 months ago
- [ICLR 2025] Code&Data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization"☆13Updated last year
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge.☆84Updated 7 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen…☆80Updated 3 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning☆86Updated 7 months ago
- Official Implementation for EMNLP 2024 (main) "AgentReview: Exploring Academic Peer Review with LLM Agent."☆85Updated 10 months ago
- ☆106Updated 3 months ago
- ACL'2025: SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs. and preprint: SoftCoT++: Test-Time Scaling with Soft Chain-of…☆46Updated 3 months ago