mit-han-lab / duo-attention
DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads
☆348 · Updated last week
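The split named in the description is between retrieval heads, which need the full KV cache to fetch distant context, and streaming heads, which only attend to a few attention-sink tokens plus a recent window. Below is a minimal sketch of that per-head cache policy; the function name, tensor shapes, and default sizes are illustrative assumptions, not the repo's actual API.

```python
import torch

def prune_kv_cache(keys, values, is_retrieval_head, num_sink=4, recent_window=256):
    """Hypothetical per-head KV-cache policy in the spirit of DuoAttention.

    keys, values: tensors of shape [num_heads, seq_len, head_dim]
    is_retrieval_head: bool tensor of shape [num_heads]
    """
    seq_len = keys.shape[1]
    pruned_k, pruned_v = [], []
    for h in range(keys.shape[0]):
        if is_retrieval_head[h] or seq_len <= num_sink + recent_window:
            # Retrieval heads keep the full cache (as do short sequences,
            # where pruning would duplicate or drop nothing useful).
            pruned_k.append(keys[h])
            pruned_v.append(values[h])
        else:
            # Streaming heads keep only the attention-sink tokens at the
            # start of the sequence plus the most recent window.
            pruned_k.append(torch.cat([keys[h, :num_sink], keys[h, -recent_window:]]))
            pruned_v.append(torch.cat([values[h, :num_sink], values[h, -recent_window:]]))
    return pruned_k, pruned_v
```

Because only the retrieval heads pay for full-context KV storage, memory grows with sequence length only on that subset of heads, which is where the method's savings come from.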
Related projects
Alternatives and complementary repositories for duo-attention
- [ICML 2024] CLLMs: Consistency Large Language Models ☆351 · Updated this week
- KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆240 · Updated last month
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆303 · Updated 2 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆279 · Updated this week
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆195 · Updated last week
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆277 · Updated 4 months ago
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆134 · Updated 4 months ago
- Ring attention implementation with flash attention ☆578 · Updated this week
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference of large language models ☆281 · Updated 3 months ago
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving ☆434 · Updated this week
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers ☆196 · Updated 2 months ago
- Model Compression Toolbox for Large Language Models and Diffusion Models ☆161 · Updated this week
- Official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆94 · Updated last month
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆146 · Updated 4 months ago
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆222 · Updated last month
- A scalable and robust tree-based speculative decoding algorithm ☆314 · Updated 3 months ago
- Code releases for compression methods for transformers, accompanying our publications ☆369 · Updated last month
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆184 · Updated last month
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware ☆644 · Updated last month
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models ☆387 · Updated 3 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆353 · Updated last week
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆183 · Updated 2 weeks ago
- REST: Retrieval-Based Speculative Decoding (NAACL 2024) ☆174 · Updated last month
- OLMoE: Open Mixture-of-Experts Language Models ☆436 · Updated last week
- Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆70 · Updated last week