BY571 / SCoRe
SCoRe: Training Language Models to Self-Correct via Reinforcement Learning
☆12 · Updated 8 months ago
Alternatives and similar repositories for SCoRe
Users who are interested in SCoRe are comparing it to the libraries listed below.
- ☆18 · Updated 2 months ago
- Self-Supervised Alignment with Mutual Information · ☆21 · Updated last year
- [NeurIPS 2025 Spotlight] ReasonFlux-Coder: Open-Source LLM Coders with Co-Evolving Reinforcement Learning · ☆125 · Updated last month
- ☆53 · Updated 8 months ago
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning · ☆51 · Updated last year
- ☆50 · Updated 8 months ago
- ☆22 · Updated last year
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… · ☆32 · Updated last year
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment · ☆57 · Updated last year
- Exploration of automated dataset selection approaches at large scales. · ☆47 · Updated 7 months ago
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) · ☆71 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" · ☆48 · Updated last year
- ☆74 · Updated last month
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples · ☆107 · Updated 2 months ago
- ☆20 · Updated last year
- Regressing the Relative Future: Efficient Policy Optimization for Multi-turn RLHF · ☆23 · Updated last year
- exploring whether LLMs perform case-based or rule-based reasoning · ☆29 · Updated last year
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" · ☆17 · Updated last year
- Directional Preference Alignment · ☆57 · Updated last year
- ☆22 · Updated 4 months ago
- Extensive Self-Contrast Enables Feedback-Free Language Model Alignment · ☆20 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards · ☆44 · Updated 6 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? · ☆30 · Updated 2 months ago
- Sotopia-RL: Reward Design for Social Intelligence · ☆43 · Updated last month
- [NAACL 2025] The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M… · ☆29 · Updated last year
- ☆20 · Updated last year
- ☆45 · Updated 2 weeks ago
- Codebase for Instruction Following without Instruction Tuning · ☆36 · Updated last year
- ☆101 · Updated last year
- The repository contains code for Adaptive Data Optimization · ☆26 · Updated 10 months ago