jamqd / Group-Preference-Optimization
☆14 · Updated 8 months ago
Related projects
Alternatives and complementary repositories for Group-Preference-Optimization
- ☆35 · Updated 9 months ago
- Rewarded soups official implementation ☆50 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆52 · Updated last week
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆47 · Updated last week
- Lightweight Adapting for Black-Box Large Language Models ☆18 · Updated 8 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆52 · Updated 2 months ago
- Source codes for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023). ☆14 · Updated last year
- ☆79 · Updated last year
- [ACL'24, Outstanding Paper] Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! ☆29 · Updated 3 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". ☆13 · Updated 2 weeks ago
- Official Code for Paper: Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆58 · Updated last month
- ☆19 · Updated last month
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆52 · Updated last month
- Code for "Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective" ☆16 · Updated last year
- Official code for paper Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation ☆15 · Updated 8 months ago
- 🌾 OAT: Online AlignmenT for LLMs ☆27 · Updated this week
- A repo for RLHF training and BoN over LLMs, with support for reward model ensembles. ☆29 · Updated 8 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆95 · Updated 2 months ago
- ☆26 · Updated 6 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆85 · Updated last year
- This is code for most of the experiments in the paper Understanding the Effects of RLHF on LLM Generalisation and Diversity ☆38 · Updated 9 months ago
- ☆15 · Updated 3 months ago
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆14 · Updated 6 months ago
- The code of “Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning” ☆15 · Updated 8 months ago
- Code for Paper (Policy Optimization in RLHF: The Impact of Out-of-preference Data) ☆23 · Updated 10 months ago
- ☆20 · Updated 4 months ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆96 · Updated last year
- Official implementation of ICLR'24 paper, "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX…) ☆60 · Updated 7 months ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆19 · Updated 5 months ago
- Direct preference optimization with f-divergences. ☆11 · Updated last week