JerryYin777 / Cross-Layer-Attention
Self-reproduction code of the paper "Reducing Transformer Key-Value Cache Size with Cross-Layer Attention" (MIT CSAIL)
☆18 · Updated last year
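For context, Cross-Layer Attention (CLA) shrinks the KV cache by letting groups of consecutive decoder layers share a single set of key/value projections, so K/V are computed (and, at decode time, cached) once per group rather than once per layer. Below is a minimal, illustrative PyTorch sketch of that sharing pattern; the names (`CLABlock`, `owns_kv`, the share factor of 2) are assumptions for illustration and are not taken from this repository's code.

```python
# Minimal sketch of Cross-Layer Attention (CLA) KV sharing, assuming a decoder
# where every pair of consecutive layers reuses one K/V projection (CLA2).
# Class and variable names are illustrative, not from the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CLABlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int, owns_kv: bool):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        self.out_proj = nn.Linear(d_model, d_model, bias=False)
        self.owns_kv = owns_kv
        if owns_kv:  # only the first layer of each group projects K/V
            self.kv_proj = nn.Linear(d_model, 2 * d_model, bias=False)

    def forward(self, x, shared_kv=None):
        B, T, _ = x.shape
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        if self.owns_kv:
            k, v = self.kv_proj(x).chunk(2, dim=-1)
            k = k.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
            v = v.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
            shared_kv = (k, v)  # computed once, reused by following layers
        k, v = shared_kv  # layers without their own K/V reuse the group's
        y = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        y = y.transpose(1, 2).reshape(B, T, -1)
        return self.out_proj(y), shared_kv

# With a share factor of 2, only even-indexed layers own a KV projection,
# so KV-cache memory is roughly halved versus per-layer K/V.
layers = nn.ModuleList(
    [CLABlock(256, 8, owns_kv=(i % 2 == 0)) for i in range(4)]
)
x, kv = torch.randn(1, 16, 256), None
for layer in layers:
    x, kv = layer(x, kv)
```

Each layer keeps its own query and output projections; only the K/V tensors are shared, which is what lets the decode-time cache hold one K/V pair per group of layers.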
Alternatives and similar repositories for Cross-Layer-Attention
Users interested in Cross-Layer-Attention are comparing it to the libraries listed below.
- [ICML 2025 Oral] Mixture of Lookup Experts ☆53 · Updated 5 months ago
- WeGeFT: Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models ☆22 · Updated 3 months ago
- Open-Pandora: On-the-fly Control Video Generation ☆35 · Updated 11 months ago
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆94 · Updated 11 months ago
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert… ☆13 · Updated 9 months ago
- Official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆39 · Updated last year
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆36 · Updated last year
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆61 · Updated 8 months ago
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling ☆39 · Updated 8 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆38 · Updated last year
- Triton implementation of bi-directional (non-causal) linear attention ☆56 · Updated 9 months ago
- VideoNSA: Native Sparse Attention Scales Video Understanding ☆54 · Updated last week
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and cach… ☆44 · Updated last week
- Unveiling Super Experts in Mixture-of-Experts Large Language Models ☆29 · Updated last month
- Is gradient information useful for pruning LLMs? ☆47 · Updated 2 months ago
- Xmixers: A collection of SOTA efficient token/channel mixers ☆29 · Updated 2 months ago
- A Survey of Efficient Attention Methods: Hardware-efficient, Sparse, Compact, and Linear Attention ☆212 · Updated 2 months ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆89 · Updated 11 months ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆131 · Updated this week
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆124 · Updated 4 months ago
- This repo contains the source code for VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks (NeurIPS 2024) ☆42 · Updated last year
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated 2 years ago