princeton-nlp / unintentional-unalignment
[ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization
☆29 · Updated 6 months ago
Alternatives and similar repositories for unintentional-unalignment
Users interested in unintentional-unalignment are comparing it to the libraries listed below.
- ☆43 · Updated last year
- ☆29 · Updated last year
- Lightweight Adapting for Black-Box Large Language Models ☆22 · Updated last year
- ☆56 · Updated 2 months ago
- ☆51 · Updated last year
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆63 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated 10 months ago
- Code for Paper (Preserving Diversity in Supervised Fine-tuning of Large Language Models) ☆35 · Updated 2 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆74 · Updated 4 months ago
- Official repo for Towards Uncertainty-Aware Language Agent ☆26 · Updated 11 months ago
- Active Example Selection for In-Context Learning (EMNLP'22) ☆49 · Updated last year
- Directional Preference Alignment ☆59 · Updated 10 months ago
- Official code of the paper Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Le… ☆75 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆78 · Updated 7 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆62 · Updated 8 months ago
- ☆96 · Updated last year
- [ICLR 2023] Code for "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆108 · Updated 2 years ago
- ☆51 · Updated 3 months ago
- Augmenting Statistical Models with Natural Language Parameters ☆27 · Updated 10 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆114 · Updated 10 months ago
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆125 · Updated last year
- Code for "Reasoning to Learn from Latent Thoughts" ☆114 · Updated 4 months ago
- ICML 2024 · Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆58 · Updated last year
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆108 · Updated last year
- ☆52 · Updated 2 years ago
- ☆14 · Updated 3 weeks ago
- ☆45 · Updated 2 years ago
- ☆34 · Updated last year
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆76 · Updated 2 years ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year