GATECH-EIC / ShiftAddLLM
ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization
☆109 · Updated 8 months ago
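For context, here is a minimal sketch of the shift-and-add idea the title refers to: each weight is rounded to a signed power of two, so weight-activation products reduce to bit-shifts and additions on integer hardware. This is an illustrative assumption, not ShiftAddLLM's actual reparameterization or API; the function names below are hypothetical.

```python
import torch

def quantize_to_pow2(w: torch.Tensor):
    """Round each weight to the nearest signed power of two: w ≈ sign * 2^shift."""
    sign = torch.sign(w)
    sign[sign == 0] = 1.0  # avoid a zero sign for exactly-zero weights
    shift = torch.round(torch.log2(w.abs().clamp(min=1e-8)))
    return sign, shift

def shift_add_linear(x: torch.Tensor, sign: torch.Tensor, shift: torch.Tensor):
    """Emulate y = x @ W^T with W ≈ sign * 2^shift.

    The 2**shift scaling stands in for a hardware bit-shift; the matmul here
    only emulates the accumulate-by-additions step in floating point.
    """
    w_hat = sign * torch.pow(2.0, shift)
    return x @ w_hat.t()

if __name__ == "__main__":
    torch.manual_seed(0)
    w = torch.randn(8, 16)   # dense weights
    x = torch.randn(4, 16)   # a batch of activations
    sign, shift = quantize_to_pow2(w)
    y_ref = x @ w.t()
    y_approx = shift_add_linear(x, sign, shift)
    print("mean abs error:", (y_ref - y_approx).abs().mean().item())
```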
Alternatives and similar repositories for ShiftAddLLM
Users interested in ShiftAddLLM are comparing it to the libraries listed below.
- LLM Inference with Microscaling Format ☆23 · Updated 7 months ago
- ☆60 · Updated last week
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆163 · Updated 11 months ago
- ☆59 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆108 · Updated 2 months ago
- ☆51 · Updated 11 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆79 · Updated 9 months ago
- ☆130 · Updated 4 months ago
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆137 · Updated last month
- [ICML 2025] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models ☆32 · Updated 10 months ago
- This repository contains the training code of ParetoQ, introduced in our work "ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization" ☆80 · Updated 3 weeks ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆310 · Updated 11 months ago
- ☆31 · Updated last year
- Code Repository of Evaluating Quantized Large Language Models ☆124 · Updated 9 months ago
- PB-LLM: Partially Binarized Large Language Models ☆152 · Updated last year
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models" ☆63 · Updated last year
- ☆137 · Updated this week
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra low-bit LLMs. ☆115 · Updated last year
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆123 · Updated 4 months ago
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆218 · Updated 5 months ago
- ☆75 · Updated 5 months ago
- ☆20 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆303 · Updated 5 months ago
- 16-fold memory access reduction with nearly no loss ☆99 · Updated 2 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆116 · Updated 6 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆71 · Updated 8 months ago
- Work in progress. ☆69 · Updated 2 weeks ago
- ☆151 · Updated 2 years ago
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ☆69 · Updated 11 months ago
- AFPQ code implementation ☆21 · Updated last year