GuoTianYu2000 / Active-Dormant-Attention
Code and plots for "Active-Dormant Attention Heads: Mechanistically Demystifying Extreme-Token Phenomena in LLMs"
☆10 · Updated 11 months ago
Alternatives and similar repositories for Active-Dormant-Attention
Users interested in Active-Dormant-Attention are comparing it to the repositories listed below.
- Unofficial Implementation of Selective Attention Transformer ☆18 · Updated last year
- ☆35 · Updated last year
- ☆19 · Updated 9 months ago
- Stick-breaking attention ☆62 · Updated 5 months ago
- ☆20 · Updated last month
- Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation ☆25 · Updated 10 months ago
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆38 · Updated last year
- Code for the EMNLP 2024 paper "A simple and effective L2 norm based method for KV Cache compression." ☆17 · Updated last year
- Position Coupling: Improving Length Generalization of Arithmetic Transformers Using Task Structure (NeurIPS 2024) + Arithmetic Transfor… ☆11 · Updated 2 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆86 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆151 · Updated 5 months ago
- Code for the paper "Why Transformers Need Adam: A Hessian Perspective" ☆63 · Updated 9 months ago
- Kinetics: Rethinking Test-Time Scaling Laws ☆84 · Updated 5 months ago
- [NeurIPS 2025] Multi-Token Prediction Needs Registers ☆26 · Updated 2 weeks ago
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) ☆31 · Updated 3 months ago
- [ICML 2024] "LoCoCo: Dropping In Convolutions for Long Context Compression", Ruisi Cai, Yuandong Tian, Zhangyang Wang, Beidi Chen ☆18 · Updated last year
- ☆101 · Updated 10 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆107 · Updated 2 months ago
- Flash Attention in 300-500 lines of CUDA/C++ ☆36 · Updated 4 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆127 · Updated 6 months ago
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models ☆21 · Updated last year
- ☆43 · Updated last month
- ☆36 · Updated 9 months ago
- ☆33 · Updated 2 years ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆40 · Updated 2 months ago
- ☆10 · Updated last year
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 8 months ago
- Official PyTorch implementation of "The Curse of Depth in Large Language Models" by Wenfang Sun, Xinyuan Song, Pengxiang Li, Lu Yin, Yefen… ☆64 · Updated 3 weeks ago
- Benchmarking Optimizers for LLM Pretraining ☆47 · Updated last week
- ☆30 · Updated 2 years ago