sail-sg / Attention-Sink
[ATTRIB @ NeurIPS 2024] When Attention Sink Emerges in Language Models: An Empirical View
☆27 · Updated 3 weeks ago
Related projects
Alternatives and complementary repositories for Attention-Sink
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆39 · Updated 3 months ago
- Code for the paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning" ☆63 · Updated 9 months ago
- [SafeGenAi @ NeurIPS 2024] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates ☆59 · Updated 2 weeks ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆63 · Updated last year
- [ICLR 2024] RAIN: Your Language Models Can Align Themselves without Finetuning ☆83 · Updated 5 months ago
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration ☆21 · Updated 4 months ago
- DiffuGPT and DiffuLLaMA: Scaling Diffusion Language Models via Adaptation from Autoregressive Models ☆56 · Updated 2 weeks ago
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆14 · Updated 6 months ago
- Codebase for Decoding Compressed Trust ☆20 · Updated 6 months ago
- Intriguing Properties of Data Attribution on Diffusion Models (ICLR 2024) ☆23 · Updated 9 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆68 · Updated 3 weeks ago
- Long Context Extension and Generalization in LLMs ☆39 · Updated last month
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆34 · Updated 2 weeks ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆67 · Updated last month
- [NeurIPS 2024] Official code of β-DPO: Direct Preference Optimization with Dynamic β ☆29 · Updated 2 weeks ago
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆32 · Updated 3 weeks ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆34 · Updated 7 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆26 · Updated 4 months ago
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ☆30 · Updated 3 weeks ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆15 · Updated 5 months ago
- A Survey on the Honesty of Large Language Models ☆44 · Updated last month
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆58 · Updated 11 months ago