princeton-nlp / unintentional-unalignment
[ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization
☆29 · Updated 5 months ago
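The repository above concerns likelihood displacement in Direct Preference Optimization (DPO). For context, here is a minimal sketch of the standard DPO loss for a single preference pair; this is an illustrative standalone function (the names `dpo_loss`, `beta`, and the log-probability arguments are assumptions for illustration, not code from this repo):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-example DPO loss from policy and reference log-probabilities."""
    # Implicit reward margin: difference of the policy-vs-reference
    # log-ratios for the chosen and rejected responses.
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # Logistic loss on the beta-scaled margin: -log(sigmoid(beta * margin)).
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

When the margin is zero the loss equals ln 2; a larger margin in favor of the chosen response lowers the loss. Likelihood displacement refers to the phenomenon, studied in the paper, where the chosen response's absolute likelihood can still fall during training even as this margin grows.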
Alternatives and similar repositories for unintentional-unalignment
Users interested in unintentional-unalignment are comparing it to the repositories listed below.
- ☆40 · Updated last year
- ☆49 · Updated last year
- ☆29 · Updated last year
- Lightweight Adapting for Black-Box Large Language Models ☆22 · Updated last year
- This is the official repo for Towards Uncertainty-Aware Language Agent. ☆25 · Updated 10 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 2 months ago
- Directional Preference Alignment ☆57 · Updated 9 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆121 · Updated 9 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆60 · Updated 7 months ago
- Source codes for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023). ☆16 · Updated 5 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆72 · Updated 3 months ago
- Self-Supervised Alignment with Mutual Information ☆19 · Updated last year
- ☆74 · Updated last year
- Official repository for ICLR 2024 Spotlight paper "Large Language Models Are Not Robust Multiple Choice Selectors" ☆39 · Updated last month
- ☆27 · Updated 2 years ago
- ☆51 · Updated 2 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- ☆95 · Updated last year
- ☆13 · Updated 6 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". ☆24 · Updated 7 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆32 · Updated 9 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆59 · Updated last year
- Rewarded soups official implementation ☆58 · Updated last year
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆72 · Updated 2 years ago
- Code for the EMNLP 2024 paper: How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for M… ☆12 · Updated 7 months ago
- ☆25 · Updated last year
- What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆32 · Updated 2 months ago
- ☆44 · Updated 2 years ago
- Augmenting Statistical Models with Natural Language Parameters ☆27 · Updated 9 months ago
- Active Example Selection for In-Context Learning (EMNLP'22) ☆49 · Updated 11 months ago