JerryYin777 / Cross-Layer-Attention
Self-reproduction code for the paper "Reducing Transformer Key-Value Cache Size with Cross-Layer Attention" (MIT CSAIL)
☆12 · Updated 8 months ago
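For orientation, here is a minimal, illustrative sketch (not this repository's actual implementation) of the Cross-Layer Attention idea from the paper: designated layers compute key/value projections, and the layers above them reuse those keys and values, so only the producing layers need to keep a KV cache. The names (`CLASelfAttention`, `owns_kv`, `shared_kv`) are assumptions for illustration, and a single-head PyTorch attention is used for brevity.

```python
# Illustrative sketch of Cross-Layer Attention (CLA); not the repository's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CLASelfAttention(nn.Module):
    """Single-head attention; a layer either projects its own K/V or reuses a shared pair."""
    def __init__(self, d_model: int, owns_kv: bool):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        self.out_proj = nn.Linear(d_model, d_model, bias=False)
        self.owns_kv = owns_kv
        if owns_kv:
            # Only "producer" layers hold K/V projections (and thus a KV cache at inference).
            self.k_proj = nn.Linear(d_model, d_model, bias=False)
            self.v_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x, shared_kv=None):
        q = self.q_proj(x)
        if self.owns_kv:
            k, v = self.k_proj(x), self.v_proj(x)
        else:
            k, v = shared_kv  # reuse keys/values produced by an earlier layer
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.out_proj(out), (k, v)

# CLA-2 style stack: every second layer reuses the K/V of the layer below it,
# roughly halving the KV cache that must be kept while decoding.
layers = nn.ModuleList([CLASelfAttention(64, owns_kv=(i % 2 == 0)) for i in range(4)])
x, kv = torch.randn(1, 16, 64), None
for layer in layers:
    x, kv = layer(x, shared_kv=kv)
print(x.shape)  # torch.Size([1, 16, 64])
```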
Alternatives and similar repositories for Cross-Layer-Attention:
Users interested in Cross-Layer-Attention are comparing it to the repositories listed below
- Open-Pandora: On-the-fly Control Video Generation ☆32 · Updated 2 months ago
- Triton implementation of bi-directional (non-causal) linear attention ☆41 · Updated 2 weeks ago
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆35 · Updated 8 months ago
- "Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding" Zhenyu Zhang, Runjin Chen, Shiw… ☆25 · Updated 9 months ago
- Code for paper "Patch-Level Training for Large Language Models" ☆80 · Updated 3 months ago
- Efficient Mixture of Experts for LLM Paper List ☆36 · Updated 2 months ago
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference ☆40 · Updated 3 months ago
- BESA is a differentiable weight pruning technique for large language models. ☆14 · Updated 11 months ago
- GIFT: Generative Interpretable Fine-Tuning ☆20 · Updated 4 months ago
- ☆99 · Updated 11 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆28 · Updated 8 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts (EMNLP 2023)" ☆35 · Updated 10 months ago
- [ACL 2023] Code for paper "Tailoring Instructions to Student’s Learning Levels Boosts Knowledge Distillation" (https://arxiv.org/abs/2305.…) ☆38 · Updated last year
- Mixture of Attention Heads ☆41 · Updated 2 years ago
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models ☆18 · Updated 8 months ago
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆14 · Updated 7 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆43 · Updated 2 weeks ago
- ☆30 · Updated 8 months ago
- The open-source materials for paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity". ☆18 · Updated 3 months ago
- FocusLLM: Scaling LLM’s Context by Parallel Decoding ☆36 · Updated 2 months ago
- This repo contains code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" ☆11 · Updated last month
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆34 · Updated 2 months ago
- [ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang ☆14 · Updated last year
- The official repo of continuous speculative decoding ☆24 · Updated 3 months ago
- Differentiable top-k operator ☆21 · Updated last month
- ☆33 · Updated 3 months ago
- ☆32 · Updated last month