princeton-nlp / unintentional-unalignment
[ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization
☆32 · Updated 11 months ago
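The repository accompanies a paper on likelihood displacement in DPO: during preference optimization, the log-probability of the preferred response can decrease even while the preference margin over the rejected response grows. As a rough illustration only (this is not code from the repository; the function names and toy numbers below are made up), here is a minimal PyTorch sketch of the standard DPO loss on precomputed per-sequence log-probabilities, together with a simple indicator of whether the chosen response has become less likely under the policy than under the reference model.

```python
# Minimal sketch, not the repository's implementation. Assumes per-sequence
# log-probabilities (sum of token log-probs over the response) are precomputed.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Standard DPO objective: push the policy-vs-reference log-ratio of the
    # chosen response above that of the rejected response.
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

def chosen_logp_drop(ref_chosen_logp, policy_chosen_logp):
    # Positive values mean the chosen (preferred) response is now LESS likely
    # under the policy than under the reference model it started from --
    # a simple proxy for the likelihood displacement the paper analyzes.
    return (ref_chosen_logp - policy_chosen_logp).mean()

# Toy per-sequence log-probabilities for a batch of two preference pairs.
policy_chosen = torch.tensor([-12.0, -16.0])
policy_rejected = torch.tensor([-18.0, -20.0])
ref_chosen = torch.tensor([-11.0, -15.0])
ref_rejected = torch.tensor([-14.0, -17.0])

print(dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected))
print(chosen_logp_drop(ref_chosen, policy_chosen))  # > 0 here: displacement
```

In this toy batch the DPO margin is positive (the loss is small), yet the chosen response's log-probability has still dropped relative to the reference model, which is exactly the kind of unintended behavior the paper studies.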
Alternatives and similar repositories for unintentional-unalignment
Users interested in unintentional-unalignment are comparing it to the repositories listed below.
- ☆29 · Updated last year
- Lightweight Adapting for Black-Box Large Language Models ☆24 · Updated last year
- ☆13 · Updated 5 months ago
- ☆46 · Updated last year
- ☆51 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆125 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆68 · Updated last year
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆28 · Updated last year
- Official repo for Towards Uncertainty-Aware Language Agent. ☆31 · Updated last year
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆48 · Updated 7 months ago
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023). ☆17 · Updated 11 months ago
- ☆46 · Updated 2 years ago
- Official code for the paper Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Le… ☆75 · Updated last year
- Learning adapter weights from task descriptions ☆19 · Updated 2 years ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆85 · Updated 9 months ago
- ☆52 · Updated 8 months ago
- ☆103 · Updated 2 years ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆82 · Updated last year
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". ☆29 · Updated last year
- Code for the EMNLP 2024 paper: How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for M… ☆13 · Updated last year
- Directional Preference Alignment ☆58 · Updated last year
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- Official repository for the ICLR 2024 Spotlight paper "Large Language Models Are Not Robust Multiple Choice Selectors" ☆42 · Updated 7 months ago
- Test-time training on nearest neighbors for large language models ☆49 · Updated last year
- Official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆39 · Updated last year
- Augmenting Statistical Models with Natural Language Parameters ☆29 · Updated last year
- ☆57 · Updated 7 months ago
- Self-Supervised Alignment with Mutual Information ☆20 · Updated last year
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆137 · Updated last year
- A curated list of resources on Reinforcement Learning with Verifiable Rewards (RLVR) and the reasoning capability boundary of Large Langu… ☆85 · Updated 2 weeks ago