ruizheng20 / gpo
Code for the paper "Toward Optimal LLM Alignments Using Two-Player Games".
☆16 · Updated 8 months ago
Alternatives and similar repositories for gpo:
Users interested in gpo are comparing it to the repositories listed below.
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆64 · Updated 3 months ago
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆35 · Updated 2 weeks ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆89 · Updated 9 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". ☆19 · Updated 4 months ago
- Official implementation of "Privacy Implications of Retrieval-Based Language Models" (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆35 · Updated 8 months ago
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023). ☆15 · Updated last month
- AbstainQA (ACL 2024) ☆25 · Updated 4 months ago
- Code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆15 · Updated last year
- [NeurIPS 2024] Official code for $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆41 · Updated 4 months ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆29 · Updated last month
- Learning adapter weights from task descriptions ☆16 · Updated last year
- Directional Preference Alignment ☆56 · Updated 5 months ago
- Lightweight Adapting for Black-Box Large Language Models ☆20 · Updated last year
- Official repository for the ICLR 2024 Spotlight paper "Large Language Models Are Not Robust Multiple Choice Selectors" ☆38 · Updated 8 months ago
- In-context Example Selection with Influences ☆15 · Updated last year
- The Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K by adding irrelevant se… ☆58 · Updated 2 years ago
- [ACL 2023 Findings] What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning ☆21 · Updated last year
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆57 · Updated 2 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆66 · Updated 2 years ago
- [NAACL 2024 Findings] Evaluation suite for the systematic evaluation of instruction selection methods. ☆22 · Updated last year
- Official implementation of Rewarded Soups ☆54 · Updated last year
- Official repo for "Towards Uncertainty-Aware Language Agent" ☆24 · Updated 6 months ago
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆64 · Updated last year
- Code for the EMNLP'24 paper "On Diversified Preferences of Large Language Model Alignment" ☆15 · Updated 6 months ago