JerryYin777 / Cross-Layer-Attention
Self-reproduction code for the paper "Reducing Transformer Key-Value Cache Size with Cross-Layer Attention" (MIT CSAIL)
☆16 · Updated last year
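Since every entry below relates to KV-cache or inference efficiency, it helps to recall what this repository reproduces: Cross-Layer Attention (CLA) shrinks the KV cache by letting groups of adjacent layers share one set of key/value activations, so only a subset of layers write cache entries. The PyTorch sketch below illustrates that idea only; the class and argument names (`CLAAttention`, `owns_kv`, `shared_kv`) are illustrative assumptions, not this repository's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CLAAttention(nn.Module):
    """Minimal sketch of Cross-Layer Attention (CLA).

    Layers with owns_kv=True compute and cache K/V; layers with
    owns_kv=False reuse the K/V handed down from an earlier layer,
    adding nothing to the KV cache. All names are hypothetical.
    """

    def __init__(self, d_model: int, n_heads: int, owns_kv: bool):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        # Only "producer" layers carry a K/V projection.
        self.kv_proj = nn.Linear(d_model, 2 * d_model, bias=False) if owns_kv else None
        self.o_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor, shared_kv=None):
        B, T, _ = x.shape
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        if self.kv_proj is not None:
            k, v = self.kv_proj(x).chunk(2, dim=-1)
            k = k.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
            v = v.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
            shared_kv = (k, v)  # cached once, then reused downstream
        else:
            k, v = shared_kv  # cross-layer reuse: no new cache entry
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        out = out.transpose(1, 2).reshape(B, T, -1)
        return self.o_proj(out), shared_kv
```

With a sharing factor of 2 (alternating `owns_kv=True` and `owns_kv=False` across layers), the KV cache stores entries for only half the layers, roughly halving its memory footprint.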
Alternatives and similar repositories for Cross-Layer-Attention
Users interested in Cross-Layer-Attention are comparing it to the libraries listed below.
- Triton implementation of bi-directional (non-causal) linear attention ☆48 · Updated 4 months ago
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆37 · Updated 8 months ago
- Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆82 · Updated 6 months ago
- Open-Pandora: On-the-fly Control Video Generation ☆34 · Updated 6 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆72 · Updated this week
- [ICLR 2024] Official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆38 · Updated last year
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated last year
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆35 · Updated 11 months ago
- [ICML 2025 Spotlight] Mixture of Lookup Experts ☆24 · Updated 3 weeks ago
- Quantized Attention on GPU ☆44 · Updated 6 months ago
- ICLR 2025 ☆26 · Updated 2 weeks ago
- Official repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆61 · Updated 2 months ago
- Open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆21 · Updated 6 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆41 · Updated last month
- Official repository for the ICML 2024 paper "MoRe Fine-Tuning with 10x Fewer Parameters" ☆18 · Updated 2 weeks ago
- ☆56 · Updated last year
- Code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆41 · Updated 2 months ago
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs" ☆17 · Updated 2 weeks ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆86 · Updated 6 months ago
- ☆47 · Updated 2 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆30 · Updated 11 months ago
- Paper list on efficient Mixture of Experts for LLMs ☆68 · Updated 5 months ago
- ☆74 · Updated 3 months ago
- Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" ☆34 · Updated last week
- GIFT: Generative Interpretable Fine-Tuning ☆20 · Updated 7 months ago
- Differentiable top-k operator ☆21 · Updated 5 months ago
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆50 · Updated 9 months ago
- LongSpec: Long-Context Speculative Decoding with Efficient Drafting and Verification ☆53 · Updated 3 months ago
- Official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆46 · Updated 7 months ago
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models ☆21 · Updated last year