locuslab / massive-activations
Code accompanying the paper "Massive Activations in Large Language Models"
☆162 · Updated last year
Alternatives and similar repositories for massive-activations
Users interested in massive-activations are comparing it to the repositories listed below.
- ☆179 · Updated last year
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆55 · Updated 8 months ago
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆89 · Updated last year
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆84 · Updated 11 months ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆195 · Updated 10 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆161 · Updated 11 months ago
- A Sober Look at Language Model Reasoning ☆52 · Updated last week
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆145 · Updated 2 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆84 · Updated 7 months ago
- ☆174 · Updated last month
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆31 · Updated last year
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆67 · Updated 7 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" ☆102 · Updated last year
- ☆92 · Updated 8 months ago
- ☆29 · Updated last year
- ☆125 · Updated last year
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆78 · Updated 3 weeks ago
- ☆67 · Updated 3 years ago
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration ☆38 · Updated 11 months ago
- ☆258 · Updated last year
- ☆83 · Updated last month
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆106 · Updated 3 months ago
- ☆94 · Updated last year
- ☆18 · Updated 6 months ago
- ☆49 · Updated last year
- ☆79 · Updated 4 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆59 · Updated 3 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆52 · Updated 3 months ago
- ☆54 · Updated 5 months ago