ruizheng20 / gpo
Code for the paper "Toward Optimal LLM Alignments Using Two-Player Games".
☆17 · Updated last year
Alternatives and similar repositories for gpo
Users interested in gpo are comparing it to the repositories listed below.
- ☆46 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆85 · Updated 9 months ago
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- ☆39 · Updated last year
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023). ☆17 · Updated 11 months ago
- ☆32 · Updated last year
- Self-Supervised Alignment with Mutual Information ☆20 · Updated last year
- Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K, by adding irrelevant se… ☆65 · Updated 2 years ago
- Domain-specific preference (DSP) data and customized RM fine-tuning. ☆25 · Updated last year
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆28 · Updated last year
- ☆102 · Updated 2 years ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆125 · Updated last year
- Code for EMNLP'24 paper - On Diversified Preferences of Large Language Model Alignment ☆16 · Updated last year
- Teaching Models to Express Their Uncertainty in Words ☆39 · Updated 3 years ago
- GenRM-CoT: Data release for verification rationales ☆67 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- ☆46 · Updated 9 months ago
- Augmenting Statistical Models with Natural Language Parameters ☆29 · Updated last year
- ☆41 · Updated 2 years ago
- ☆52 · Updated 8 months ago
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆48 · Updated 7 months ago
- Code for the paper "Aligning LLM Agents by Learning Latent Preference from User Edits". ☆43 · Updated last year
- ☆29 · Updated last year
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆38 · Updated 7 months ago
- Lightweight Adapting for Black-Box Large Language Models ☆24 · Updated last year
- Official code of the paper Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Le… ☆75 · Updated last year
- Fairer Preferences Elicit Improved Human-Aligned Large Language Model Judgments (Zhou et al., EMNLP 2024) ☆14 · Updated last year
- Code for ACL 2024 paper - Adversarial Preference Optimization (APO). ☆56 · Updated last year
- This repository contains data, code and models for contextual noncompliance. ☆24 · Updated last year
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆115 · Updated 2 years ago