yafuly / TPO
Test-time preference optimization (ICML 2025).
☆168 · Updated 5 months ago
Alternatives and similar repositories for TPO
Users interested in TPO are comparing it to the libraries listed below.
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆258 · Updated 5 months ago
- ☆157 · Updated 2 weeks ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆135 · Updated 6 months ago
- Official repository for paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆93 · Updated 8 months ago
- 📖 This is a repository for organizing papers, codes, and other resources related to Latent Reasoning. ☆247 · Updated 3 weeks ago
- 🔧 Tool-Star: Empowering LLM-brained Multi-Tool Reasoner via Reinforcement Learning ☆270 · Updated last week
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models (NeurIPS 2025) ☆155 · Updated last week
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆112 · Updated 6 months ago
- Extrapolating RLVR to General Domains without Verifiers ☆174 · Updated 2 months ago
- ☆211 · Updated 8 months ago
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning. ☆158 · Updated last month
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆178 · Updated 3 months ago
- ☆333 · Updated 2 months ago
- ☆133 · Updated last month
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆83 · Updated 4 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆140 · Updated 3 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS 2025] ☆185 · Updated 4 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆86 · Updated 8 months ago
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language …" ☆125 · Updated 5 months ago
- ☆300 · Updated 5 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆82 · Updated 7 months ago
- ☆208 · Updated 4 months ago
- ☆169 · Updated 5 months ago
- ☆68 · Updated 4 months ago
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆130 · Updated 7 months ago
- Official Repository of "Learning to Reason under Off-Policy Guidance" ☆348 · Updated 3 weeks ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆111 · Updated 9 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆131 · Updated 11 months ago
- Official Implementation of ARPO: End-to-End Policy Optimization for GUI Agents with Experience Replay ☆130 · Updated 4 months ago
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. ☆77 · Updated last month