xiwenc1 / DRA-GRPO
Official code for the paper "DRA-GRPO: Exploring Diversity-Aware Reward Adjustment for R1-Zero-Like Training of Large Language Models"
☆20 · Updated 2 months ago
Alternatives and similar repositories for DRA-GRPO
Users interested in DRA-GRPO are comparing it to the repositories listed below.
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?"☆37Updated last month
- A Sober Look at Language Model Reasoning☆81Updated 2 months ago
- ☆50Updated last month
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization☆86Updated last year
- [ICML 2025] "From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information?"☆34Updated last month
- ☆163Updated 3 months ago
- Survey on Data-centric Large Language Models☆84Updated last year
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples☆43Updated last month
- What Makes a Reward Model a Good Teacher? An Optimization Perspective☆35Updated 2 months ago
- Discriminative Constrained Optimization for Reinforcing Large Reasoning Models☆36Updated this week
- ☆120Updated 5 months ago
- [ICLR 2025 Workshop] "Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models"☆35Updated 2 weeks ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen…☆79Updated 2 months ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning☆34Updated 9 months ago
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$☆47Updated 10 months ago
- Official Implementation for EMNLP 2024 (main) "AgentReview: Exploring Academic Peer Review with LLM Agent."☆84Updated 9 months ago
- One-shot Entropy Minimization☆180Updated 2 months ago
- ☆261Updated last month
- [ICML 2025] "From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium"☆22Updated last month
- Accepted LLM Papers in NeurIPS 2024☆37Updated 10 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free☆40Updated 4 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge.☆80Updated 6 months ago
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs.☆127Updated 5 months ago
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language …☆103Updated 3 months ago
- ☆328Updated last month
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation"☆76Updated 8 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning☆84Updated 6 months ago
- [ACL 25] SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities☆19Updated 5 months ago
- [ACL2025 Best Paper] Language Models Resist Alignment☆23Updated 2 months ago
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025.☆25Updated 6 months ago