idanshen / Value-Augmented-Sampling
☆20 · Updated last year
Alternatives and similar repositories for Value-Augmented-Sampling
Users interested in Value-Augmented-Sampling are comparing it to the repositories listed below.
- Directional Preference Alignment ☆58 · Updated last year
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆47 · Updated 9 months ago
- [ICML 2024] Official repository for "EXO: Towards Efficient Exact Optimization of Language Model Alignment" ☆57 · Updated last year
- ☆51 · Updated last year
- Self-Supervised Alignment with Mutual Information ☆20 · Updated last year
- Evaluate the Quality of Critique ☆36 · Updated last year
- Code and models for the EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆41 · Updated last year
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆65 · Updated last year
- ☆13 · Updated 7 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 8 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated last year
- Official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆39 · Updated 2 years ago
- ☆103 · Updated 2 years ago
- Benchmarking Benchmark Leakage in Large Language Models ☆58 · Updated last year
- [ACL 2024] Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models ☆27 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆77 · Updated 3 months ago
- ☆72 · Updated last year
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆65 · Updated last year
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆120 · Updated 9 months ago
- Lightweight Adapting for Black-Box Large Language Models ☆25 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆53 · Updated 8 months ago
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆32 · Updated last month
- Source code for "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Exploration of automated dataset selection approaches at large scale ☆52 · Updated 11 months ago
- ☆58 · Updated last year
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆51 · Updated 8 months ago
- Source code for running LLMs on the AAAR-1.0 benchmark ☆18 · Updated 10 months ago
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆53 · Updated last year
- Unofficial implementation of "Chain-of-Thought Reasoning Without Prompting" ☆34 · Updated last year
- [ICLR 2023] Code for the paper "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆109 · Updated 2 years ago