sail-sg / VeriFree
Reinforcing General Reasoning without Verifiers
☆81 · Updated 2 months ago
Alternatives and similar repositories for VeriFree
Users interested in VeriFree are comparing it to the repositories listed below.
- Code for "Reasoning to Learn from Latent Thoughts" · ☆118 · Updated 5 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling · ☆34 · Updated 2 weeks ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards · ☆44 · Updated 4 months ago
- ☆20 · Updated last month
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples · ☆105 · Updated last month
- A repo for open research on building large reasoning models · ☆94 · Updated this week
- ☆34 · Updated 8 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" · ☆61 · Updated 7 months ago
- The official implementation of Self-Exploring Language Models (SELM) · ☆64 · Updated last year
- Exploration of automated dataset selection approaches at large scales · ☆47 · Updated 6 months ago
- ☆47 · Updated 7 months ago
- ☆116 · Updated 7 months ago
- ☆51 · Updated 2 months ago
- ☆93 · Updated 4 months ago
- [ICML 2024] Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment · ☆58 · Updated last year
- ReasonFlux-Coder: Open-Source LLM Coders with Co-Evolving Reinforcement Learning · ☆113 · Updated 2 weeks ago
- ☆100 · Updated last year
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization · ☆45 · Updated last month
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling · ☆172 · Updated last month
- RL Scaling and Test-Time Scaling (ICML'25) · ☆113 · Updated 7 months ago
- SPIRAL: Self-Play on Zero-Sum Games Incentivizes Reasoning via Multi-Agent Multi-Turn Reinforcement Learning · ☆139 · Updated last week
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners · ☆83 · Updated 3 months ago
- A Sober Look at Language Model Reasoning · ☆82 · Updated 2 months ago
- Directional Preference Alignment · ☆59 · Updated 11 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" · ☆98 · Updated last month
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision · ☆123 · Updated last year
- ☆104 · Updated 11 months ago
- Official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… · ☆29 · Updated 9 months ago
- Replicating O1 inference-time scaling laws · ☆89 · Updated 9 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning · ☆110 · Updated 4 months ago