BY571 / SCoRe
SCoRe: Training Language Models to Self-Correct via Reinforcement Learning
☆15 · Updated last year
Alternatives and similar repositories for SCoRe
Users who are interested in SCoRe are comparing it to the repositories listed below:
- [NeurIPS 2025 Spotlight] Co-Evolving LLM Coder and Unit Tester via Reinforcement Learning ☆149 · Updated 4 months ago
- ☆53 · Updated 11 months ago
- ☆90 · Updated 3 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆120 · Updated last week
- ☆52 · Updated 11 months ago
- Exploration of automated dataset selection approaches at large scales. ☆52 · Updated 11 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆120 · Updated 9 months ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆63 · Updated last year
- ☆17 · Updated 6 months ago
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆51 · Updated last year
- Official implementation of ICLR'2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆70 · Updated 10 months ago
- Extensive Self-Contrast Enables Feedback-Free Language Model Alignment ☆21 · Updated last year
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆65 · Updated last year
- Exploring whether LLMs perform case-based or rule-based reasoning ☆30 · Updated last year
- RL Scaling and Test-Time Scaling (ICML'25) ☆113 · Updated last year
- ☆51 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆39 · Updated 2 years ago
- Natural Language Reinforcement Learning ☆101 · Updated 6 months ago
- e ☆43 · Updated 9 months ago
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆85 · Updated 8 months ago
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆53 · Updated last year
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆51 · Updated 8 months ago
- [NAACL 2025] The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M… ☆28 · Updated last year
- Reinforcing General Reasoning without Verifiers ☆96 · Updated 7 months ago
- Process Reward Models That Think ☆78 · Updated 2 months ago
- ☆76 · Updated 3 months ago
- ☆72 · Updated 7 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆65 · Updated last year
- The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL25] ☆96 · Updated 10 months ago