ruizheng20 / gpo
Code for the paper "Toward Optimal LLM Alignments Using Two-Player Games".
☆17 · Updated last year
Alternatives and similar repositories for gpo
Users interested in gpo are comparing it to the repositories listed below.
- ☆46 · Updated last year
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". ☆29 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆85 · Updated 9 months ago
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023). ☆17 · Updated 11 months ago
- ☆102 · Updated 2 years ago
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- Self-Supervised Alignment with Mutual Information ☆20 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆125 · Updated last year
- Official implementation of Rewarded Soups ☆62 · Updated 2 years ago
- ☆52 · Updated 8 months ago
- ☆46 · Updated 9 months ago
- Teaching Models to Express Their Uncertainty in Words ☆39 · Updated 3 years ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆28 · Updated last year
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆32 · Updated 11 months ago
- ☆19 · Updated last year
- ☆104 · Updated last year
- ☆51 · Updated last year
- Official implementation of the ICLR 2025 paper "Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and…" ☆70 · Updated 8 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆48 · Updated 7 months ago
- Official code of the paper "Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Le…" ☆75 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆82 · Updated last year
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆47 · Updated last year
- Code for the EMNLP'24 paper "On Diversified Preferences of Large Language Model Alignment" ☆16 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆118 · Updated 10 months ago
- ☆46 · Updated 2 years ago
- ☆33 · Updated last year
- ☆43 · Updated last year
- Lightweight Adapting for Black-Box Large Language Models ☆24 · Updated last year
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆115 · Updated 2 years ago