TianHongZXY / RLVR-Decomposed
Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning"
☆77 · Updated last week
Alternatives and similar repositories for RLVR-Decomposed
Users interested in RLVR-Decomposed are comparing it to the repositories listed below.
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆80 · Updated 6 months ago
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆56 · Updated this week
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆124 · Updated 3 months ago
- [TMLR 2025] A Survey on the Honesty of Large Language Models ☆58 · Updated 7 months ago
- [AAAI 2025 Oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆63 · Updated 7 months ago
- ☆67 · Updated last year
- ☆46 · Updated 8 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆165 · Updated last month
- A unified suite for generating elite reasoning problems and training high-performance LLMs, including pioneering attention-free architect… ☆63 · Updated last month
- ☆59 · Updated 10 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆73 · Updated last month
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆114 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆178 · Updated last year
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆60 · Updated 8 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) project: diving into self-evolving training for multimodal reasoning ☆61 · Updated this week
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆114 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆59 · Updated last year
- Official repository for the ICLR 2024 Spotlight paper "Large Language Models Are Not Robust Multiple Choice Selectors" ☆39 · Updated last month
- Large Language Models Can Self-Improve in Long-context Reasoning ☆71 · Updated 7 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated 11 months ago
- ☆202 · Updated 3 months ago
- ☆74 · Updated last year
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆74 · Updated last month
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆180 · Updated 6 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆113 · Updated 3 weeks ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆114 · Updated 10 months ago
- Code and data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆66 · Updated last year
- ☆113 · Updated 4 months ago
- Official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- [NAACL 2025 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆61 · Updated 7 months ago