zwhong714 / weak-to-strong-preference-optimization
[ICLR 2025 Spotlight] Weak-to-strong preference optimization: stealing reward from weak aligned model
☆13 · Updated 4 months ago
Alternatives and similar repositories for weak-to-strong-preference-optimization
Users who are interested in weak-to-strong-preference-optimization are comparing it to the libraries listed below
- ☆65 · Updated 3 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆73 · Updated last month
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆77 · Updated 5 months ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆134 · Updated this week
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆180 · Updated 6 months ago
- ☆205 · Updated 4 months ago
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025. ☆22 · Updated 5 months ago
- Official repository for paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆85 · Updated 4 months ago
- ☆33 · Updated 9 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆38 · Updated last month
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆124 · Updated 8 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆30 · Updated 3 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆166 · Updated 2 weeks ago
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆79 · Updated last month
- Chain of Thoughts (CoT) is so hot! so long! We need short reasoning process! ☆63 · Updated 3 months ago
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆68 · Updated 6 months ago
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆124 · Updated 3 months ago
- Official code for the paper, "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆129 · Updated this week
- The repository of the paper "REEF: Representation Encoding Fingerprints for Large Language Models," aims to protect the IP of open-source… ☆57 · Updated 6 months ago
- A comprehensive collection of process reward models. ☆95 · Updated 3 weeks ago
- Implementation code for ACL 2024: Advancing Parameter Efficiency in Fine-tuning via Representation Editing ☆14 · Updated last year
- Test-time preference optimization (ICML 2025). ☆147 · Updated 2 months ago
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆55 · Updated 7 months ago
- AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models, ICLR 2025 (Outstanding Paper) ☆282 · Updated last week
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning. ☆251 · Updated last week
- This is the official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆67 · Updated 2 months ago
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆58 · Updated 7 months ago
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆49 · Updated last month
- The official repository of the Omni-MATH benchmark. ☆85 · Updated 6 months ago
- Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆77 · Updated last week