cmu-mind / RISE
☆31 · Updated 7 months ago
Alternatives and similar repositories for RISE
Users interested in RISE are comparing it to the libraries listed below.
- Code and data used in the paper: "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆30 · Updated last year
- GenRM-CoT: Data release for verification rationales ☆61 · Updated 8 months ago
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- This is my attempt to create a Self-Correcting-LLM based on the paper "Training Language Models to Self-Correct via Reinforcement Learning" by g… ☆35 · Updated 2 months ago
- Official implementation of ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆62 · Updated 2 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆37 · Updated last week
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 9 months ago
- ☆52 · Updated 4 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆104 · Updated 2 months ago
- This is the official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆65 · Updated 2 months ago
- ☆114 · Updated 5 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆80 · Updated 10 months ago
- Natural Language Reinforcement Learning ☆89 · Updated 6 months ago
- Domain-specific preference (DSP) data and customized RM fine-tuning. ☆25 · Updated last year
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆57 · Updated 8 months ago
- Code for ACL 2024 paper - Adversarial Preference Optimization (APO). ☆54 · Updated last year
- Rewarded soups official implementation ☆58 · Updated last year
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆54 · Updated 7 months ago
- Directional Preference Alignment ☆57 · Updated 9 months ago
- ☆95 · Updated 11 months ago
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆30 · Updated last month
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision" ☆54 · Updated 6 months ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆59 · Updated 6 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆99 · Updated last month
- ☆71 · Updated 7 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆141 · Updated 4 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆124 · Updated 3 months ago
- Critique-out-Loud Reward Models ☆66 · Updated 8 months ago
- ☆33 · Updated 4 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆95 · Updated 2 weeks ago