tml-epfl / long-is-more-for-alignment
Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024]
☆15 · Updated 6 months ago
Related projects
Alternatives and complementary repositories for long-is-more-for-alignment
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆28 · Updated 4 months ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? ☆25 · Updated 5 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆57 · Updated 2 weeks ago
- Restore safety in fine-tuned language models through task arithmetic ☆26 · Updated 7 months ago
- ☆20 · Updated 5 months ago
- ☆24 · Updated last year
- ☆15 · Updated 8 months ago
- Code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆15 · Updated 8 months ago
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆69 · Updated last year
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆39 · Updated 3 months ago
- ☆33 · Updated last year
- Repo accompanying the paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆58 · Updated 8 months ago
- ☆63 · Updated 2 years ago
- ☆33 · Updated 9 months ago
- Directional Preference Alignment ☆51 · Updated 2 months ago
- Official repo for "Towards Uncertainty-Aware Language Agent" ☆22 · Updated 3 months ago
- ☆26 · Updated 6 months ago
- ☆81 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆25 · Updated 3 weeks ago
- Official code for the paper "Evaluating Copyright Takedown Methods for Language Models" ☆15 · Updated 4 months ago
- ☆49 · Updated last year
- ☆44 · Updated 10 months ago
- ☆21 · Updated last month
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023) ☆24 · Updated 2 months ago
- ☆24 · Updated 6 months ago
- ☆36 · Updated 3 months ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆15 · Updated 5 months ago
- [ATTRIB @ NeurIPS 2024 Oral] When Attention Sink Emerges in Language Models: An Empirical View ☆29 · Updated last month
- ☆19 · Updated last week
- `dattri` is a PyTorch library for developing, benchmarking, and deploying efficient data attribution algorithms ☆33 · Updated 3 weeks ago