enyac-group / Quamba
The official repository of Quamba1 [ICLR 2025] & Quamba2 [ICML 2025]
☆45 · Updated last month
Alternatives and similar repositories for Quamba
Users interested in Quamba are comparing it to the repositories listed below.
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection☆108 · Updated 2 months ago
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization☆106 · Updated 7 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models☆66 · Updated 6 months ago
- ☆54 · Updated 2 weeks ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs☆106 · Updated 3 weeks ago
- ☆128 · Updated 3 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM☆161 · Updated 10 months ago
- [ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. Compressing LLMs: The Truth is Rarely Pure and Never Simple.☆24 · Updated 3 weeks ago
- 16-fold memory access reduction with nearly no loss☆93 · Updated last month
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization☆36 · Updated 7 months ago
- ☆29 · Updated last year
- Squeezed Attention: Accelerating Long Prompt LLM Inference☆47 · Updated 5 months ago
- PB-LLM: Partially Binarized Large Language Models☆152 · Updated last year
- XAttention: Block Sparse Attention with Antidiagonal Scoring☆146 · Updated last month
- xKV: Cross-Layer SVD for KV-Cache Compression☆24 · Updated this week
- Work in progress.☆61 · Updated last month
- LLM Inference with Microscaling Format☆22 · Updated 6 months ago
- Code for paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference☆100 · Updated 3 weeks ago
- [ICML 2025] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models☆31 · Updated 9 months ago
- ☆41 · Updated last year
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retention☆65 · Updated last year
- ☆131 · Updated last month
- A sparse attention kernel supporting mixed sparse patterns☆202 · Updated 3 months ago
- ☆37 · Updated 8 months ago
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization☆133 · Updated 3 months ago
- The official implementation of the paper "Towards Efficient Mixture of Experts: A Holistic Study of Compression Techniques" (TMLR)☆67 · Updated last month
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models☆30 · Updated 11 months ago
- Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models"☆32 · Updated 3 weeks ago
- ☆42 · Updated 9 months ago
- Official Implementation of FastKV: KV Cache Compression for Fast Long-Context Processing with Token-Selective Propagation☆19 · Updated last month