yafuly / TPO
Test-time preference optimization.
☆114 · Updated this week
Alternatives and similar repositories for TPO
Users interested in TPO are comparing it to the repositories listed below.
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆94 · Updated 3 weeks ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆144 · Updated 3 weeks ago
- The official code repository for PRMBench. ☆73 · Updated 2 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆76 · Updated 3 months ago
- Reformatted Alignment ☆115 · Updated 7 months ago
- ☆45 · Updated last month
- On Memorization of Large Language Models in Logical Reasoning ☆64 · Updated last month
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆119 · Updated 6 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs". ☆119 · Updated last month
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆60 · Updated 4 months ago
- The official repository of the Omni-MATH benchmark. ☆83 · Updated 4 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆69 · Updated last month
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆70 · Updated 2 months ago
- ☆55 · Updated 6 months ago
- ☆49 · Updated last year
- [ACL 2024] Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios ☆56 · Updated last year
- ☆163 · Updated this week
- L1: Controlling How Long a Reasoning Model Thinks With Reinforcement Learning ☆198 · Updated last week
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆138 · Updated 2 months ago
- This is the implementation of LeCo ☆31 · Updated 3 months ago
- ☆153 · Updated last month
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆72 · Updated 2 weeks ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆69 · Updated 5 months ago
- From Hours to Minutes: Lossless Acceleration of Ultra Long Sequence Generation ☆89 · Updated this week
- We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆62 · Updated 6 months ago
- The demo, code, and data of FollowRAG ☆72 · Updated 2 weeks ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆74 · Updated 11 months ago
- ☆56 · Updated last week
- Harnessing the Reasoning Economy: A Survey of Efficient Reasoning for Large Language Models ☆105 · Updated last week
- A Survey on the Honesty of Large Language Models ☆57 · Updated 5 months ago