Raj-08 / Q-Flow
Complete Reinforcement Learning Toolkit for Large Language Models!
☆20 · Updated 2 months ago
Alternatives and similar repositories for Q-Flow
Users interested in Q-Flow are comparing it to the libraries listed below.
- official implementation of paper "Process Reward Model with Q-value Rankings" ☆64 · Updated 8 months ago
- ☆20 · Updated 11 months ago
- ☆101 · Updated last year
- Natural Language Reinforcement Learning ☆98 · Updated 2 months ago
- ☆152 · Updated 10 months ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated last year
- RL Scaling and Test-Time Scaling (ICML'25) ☆111 · Updated 8 months ago
- ☆67 · Updated last year
- ☆53 · Updated 8 months ago
- ☆33 · Updated 11 months ago
- Code for ACL2024 paper - Adversarial Preference Optimization (APO). ☆57 · Updated last year
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- official implementation of ICLR'2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆66 · Updated 6 months ago
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆48 · Updated 11 months ago
- ☆24 · Updated 6 months ago
- ☆29 · Updated last year
- o1 Chain of Thought Examples ☆33 · Updated last year
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆111 · Updated last year
- [NeurIPS 2025 Spotlight] ReasonFlux-Coder: Open-Source LLM Coders with Co-Evolving Reinforcement Learning ☆125 · Updated last month
- The code for creating the iGSM datasets in papers "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Proces… ☆78 · Updated 9 months ago
- Official github repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆142 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆71 · Updated last year
- ☆18 · Updated last year
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆51 · Updated last year
- ☆34 · Updated last year
- ☆116 · Updated 8 months ago
- Directional Preference Alignment ☆57 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated last year
- ☆68 · Updated 3 weeks ago
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year