Raj-08 / Q-Flow
Complete Reinforcement Learning Toolkit for Large Language Models!
☆21 · Updated 6 months ago
Alternatives and similar repositories for Q-Flow
Users interested in Q-Flow are comparing it to the libraries listed below.
- ☆53 · Updated last year
- Natural Language Reinforcement Learning ☆101 · Updated 6 months ago
- ☆32 · Updated last year
- ☆20 · Updated last year
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆65 · Updated last year
- ☆160 · Updated last year
- ☆108 · Updated last year
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆115 · Updated 2 years ago
- ☆117 · Updated last year
- RL Scaling and Test-Time Scaling (ICML'25) ☆113 · Updated last year
- ☆70 · Updated last year
- MUA-RL: Multi-Turn User-Interacting Agent Reinforcement Learning for Agentic Tool Use ☆56 · Updated 3 months ago
- o1 Chain of Thought Examples ☆33 · Updated last year
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆51 · Updated last year
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" ☆56 · Updated last year
- Learning from preferences is a common paradigm for fine-tuning language models. Yet many algorithmic design decisions come into play. Ou… ☆32 · Updated last year
- Official implementation of the ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆70 · Updated 10 months ago
- ☆77 · Updated 3 months ago
- ☆34 · Updated last year
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- RLHF experiments on a single A100 40G GPU. Supports PPO, GRPO, REINFORCE, RAFT, RLOO, ReMax, and DeepSeek R1-Zero reproduction. ☆79 · Updated 11 months ago
- Process Reward Models That Think ☆78 · Updated 2 months ago
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆155 · Updated last year
- Regressing the Relative Future: Efficient Policy Optimization for Multi-turn RLHF ☆24 · Updated last year
- Code and models for the EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆41 · Updated last year
- [EMNLP 2025] CompassVerifier: A Unified and Robust Verifier for LLMs Evaluation and Outcome Reward ☆63 · Updated 6 months ago
- ☆52 · Updated last year
- [NeurIPS 2025 Spotlight] Co-Evolving LLM Coder and Unit Tester via Reinforcement Learning ☆149 · Updated 4 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆124 · Updated 10 months ago
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆53 · Updated last year