mit-han-lab / duo-attention
DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads
☆418, updated last week
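The title refers to splitting attention heads into "retrieval" heads, which keep the full KV cache, and "streaming" heads, which only attend to a few initial sink tokens plus a recent window. Below is a minimal conceptual sketch of that per-head cache split; it is not the repository's actual API, and the function name `prune_kv_cache`, the per-head mask `is_retrieval_head`, and the `num_sink`/`window` sizes are hypothetical names chosen for illustration.

```python
# Conceptual sketch only (not duo-attention's API): prune the KV cache per head,
# keeping everything for retrieval heads and only sink + recent tokens for
# streaming heads. All names here are hypothetical.
import torch

def prune_kv_cache(keys, values, is_retrieval_head, num_sink=4, window=256):
    """keys, values: [num_heads, seq_len, head_dim]; is_retrieval_head: [num_heads] bool."""
    num_heads, seq_len, _ = keys.shape
    num_sink = min(num_sink, seq_len)
    pruned = []
    for h in range(num_heads):
        if is_retrieval_head[h]:
            # Retrieval heads keep the full context so long-range lookups still work.
            idx = torch.arange(seq_len)
        else:
            # Streaming heads keep only the attention-sink tokens and a recent window.
            start = max(num_sink, seq_len - window)
            idx = torch.cat([torch.arange(num_sink), torch.arange(start, seq_len)])
        pruned.append((keys[h, idx], values[h, idx]))
    return pruned
```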
Alternatives and similar repositories for duo-attention:
Users interested in duo-attention are comparing it to the libraries listed below.
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache (☆269, updated last week)
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). (☆276, updated last week)
- [ICML 2024] CLLMs: Consistency Large Language Models (☆368, updated 2 months ago)
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization (☆326, updated 5 months ago)
- Efficient LLM Inference over Long Sequences (☆349, updated last month)
- ☆216, updated 8 months ago
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving (☆492, updated this week)
- Code for the NeurIPS 2024 paper: QuaRot, end-to-end 4-bit inference for large language models. (☆321, updated 2 months ago)
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" (☆111, updated last month)
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (☆235, updated 2 months ago)
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference (☆414, updated 3 weeks ago)
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving (☆291, updated 6 months ago)
- OLMoE: Open Mixture-of-Experts Language Models (☆536, updated last month)
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" (☆145, updated 7 months ago)
- Ring attention implementation with flash attention (☆658, updated last month)
- LLM KV cache compression made easy (☆349, updated this week)
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) (☆218, updated 3 months ago)
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers (☆204, updated 5 months ago)
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. (☆694, updated 4 months ago)
- The code of our paper "InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Mem…" (☆323, updated 9 months ago)
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. (☆419, updated 5 months ago)
- A repository dedicated to evaluating the performance of quantized LLaMA3 using various quantization methods. (☆175, updated 2 weeks ago)
- A family of compressed models obtained via pruning and knowledge distillation (☆313, updated 2 months ago)
- REST: Retrieval-Based Speculative Decoding, NAACL 2024 (☆190, updated last month)
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" (☆380, updated 3 months ago)
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs (☆220, updated last week)
- Explorations into some recent techniques surrounding speculative decoding (☆231, updated last month)
- [NeurIPS'24 Spotlight, ICLR'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention, which r… (☆890, updated this week)
- ☆311, updated 9 months ago
- KV cache compression for high-throughput LLM inference (☆105, updated this week)