zwhong714 / weak-to-strong-preference-optimization
[ICLR 2025 Spotlight] Weak-to-Strong Preference Optimization: Stealing Reward from Weak Aligned Model
☆13 · Updated 5 months ago
Alternatives and similar repositories for weak-to-strong-preference-optimization
Users interested in weak-to-strong-preference-optimization are comparing it to the repositories listed below.
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆82 · Updated 6 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆78 · Updated 2 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆41 · Updated 2 months ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆184 · Updated 7 months ago
- ☆65 · Updated 4 months ago
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025. ☆25 · Updated 5 months ago
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆58 · Updated 8 months ago
- [ACL '25] The official code repository for PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models. ☆79 · Updated 6 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆171 · Updated last month
- The official repository of the paper "AdaR1: From Long-CoT to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization" ☆18 · Updated 3 months ago
- Chain of Thought (CoT) is so hot! So long! We need short reasoning processes! ☆68 · Updated 4 months ago
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆86 · Updated 5 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning. ☆287 · Updated last month
- This is the official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆69 · Updated 3 months ago
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆56 · Updated 8 months ago
- A comprehensive collection of process reward models. ☆100 · Updated 3 weeks ago
- Extrapolating RLVR to General Domains without Verifiers ☆136 · Updated 2 weeks ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆127 · Updated 3 months ago
- ☆34 · Updated 10 months ago
- ☆31 · Updated this week
- Test-time preference optimization (ICML 2025). ☆158 · Updated 3 months ago
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆81 · Updated 2 months ago
- ☆48 · Updated 9 months ago
- Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆89 · Updated last month
- The official repository of the Omni-MATH benchmark. ☆87 · Updated 7 months ago
- [arXiv:2505.02156] Adaptive Thinking via Mode Policy Optimization for Social Language Agents ☆40 · Updated last month
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆126 · Updated 9 months ago
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆59 · Updated 3 weeks ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆29 · Updated 5 months ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆149 · Updated last week