MrBlankness / TPO
PyTorch implementation of Tree Preference Optimization (TPO) (Accepted by ICLR'25)
☆23 · Updated 5 months ago
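For orientation only: TPO optimizes a language model on preference data organized as trees of reasoning paths. The sketch below shows the standard pairwise DPO-style loss that tree-based preference methods build on; the function name and arguments are illustrative assumptions, not this repo's actual API.

```python
import torch
import torch.nn.functional as F

def pairwise_preference_loss(policy_chosen_logps: torch.Tensor,
                             policy_rejected_logps: torch.Tensor,
                             ref_chosen_logps: torch.Tensor,
                             ref_rejected_logps: torch.Tensor,
                             beta: float = 0.1) -> torch.Tensor:
    """Standard pairwise DPO loss (illustrative sketch only).

    TPO's actual objective operates on preference trees; this shows
    the two-response case that such methods generalize.
    """
    # Implicit rewards are the policy-to-reference log-probability ratios.
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the preferred response's implicit reward above the dispreferred one's.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```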
Alternatives and similar repositories for TPO
Users interested in TPO are comparing it to the repositories listed below.
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆62 · Updated 2 months ago
- ☆127 · Updated 6 months ago
- ☆169 · Updated 4 months ago
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆110 · Updated last month
- A regularly updated paper list for LLMs-reasoning-in-latent-space ☆164 · Updated last week
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆128 · Updated 6 months ago
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆80 · Updated 9 months ago
- A Sober Look at Language Model Reasoning ☆83 · Updated this week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆86 · Updated 7 months ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆43 · Updated 10 months ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆186 · Updated 8 months ago
- [TMLR 2025] A Survey on the Honesty of Large Language Models ☆59 · Updated 10 months ago
- ☆207 · Updated 6 months ago
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆90 · Updated 7 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- ☆333 · Updated 2 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) project, diving into self-evolving training for multimodal reasoning ☆69 · Updated 2 months ago
- ☆50 · Updated 11 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆137 · Updated 3 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆82 · Updated 8 months ago
- Model merging is a highly efficient approach for long-to-short reasoning ☆84 · Updated 4 months ago
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆54 · Updated 4 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆30 · Updated 7 months ago
- ☆67 · Updated 5 months ago
- [ACL 2025 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆74 · Updated 3 months ago
- ☆133 · Updated 3 weeks ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆82 · Updated 6 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆120 · Updated last year
- ☆155 · Updated 4 months ago
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration ☆44 · Updated last year