[ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models"
☆39 · updated Mar 11, 2024
Alternatives and similar repositories for QLLM
Users interested in QLLM are comparing it to the libraries listed below.
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for Diffusion Models. (☆25 · updated Mar 29, 2024)
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models" (☆31 · updated Mar 12, 2024)
- Official code for "Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM" (☆14 · updated Dec 27, 2023)
- Structured Binary Neural Networks for Image Recognition (☆16 · updated Oct 12, 2022)
- Benchmarking attention mechanisms in Vision Transformers. (☆20 · updated Oct 10, 2022)
- ☆21 · updated Feb 5, 2024
- Structured Binary Neural Networks for Image Recognition (☆18 · updated Nov 18, 2021)
- [ICLR 2025] Official PyTorch implementation of the paper "T-Stitch: Accelerating Sampling in Pre-trained Diffusion Models with Trajectory Stit…" (☆104 · updated Feb 26, 2024)
- The official implementation of "BiViT: Extremely Compressed Binary Vision Transformers" (☆16 · updated Jun 18, 2023)
- ☆16 · updated Sep 12, 2023
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization (☆408 · updated Aug 13, 2024)
- [ICCV 2021] Official implementation of "Scalable Vision Transformers with Hierarchical Pooling" (☆33 · updated Dec 30, 2021)
- ☆18 · updated Oct 4, 2022
- [CVPR 2025] APHQ-ViT: Post-Training Quantization with Average Perturbation Hessian Based Reconstruction for Vision Transformers (☆39 · updated Apr 7, 2025)
- A collection of model quantization algorithms. For any issues, please contact Peng Chen (blueardour@gmail.com). (☆73 · updated Oct 7, 2021)
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models" (☆24 · updated Mar 16, 2025)
- [NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" (☆74 · updated Nov 15, 2022)
- ☆13 · updated Jun 16, 2024
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di…" (☆68 · updated Jun 4, 2024)
- PyTorch implementation of "Deep Transferring Quantization" (ECCV 2020) (☆18 · updated Jun 22, 2022)
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning" (☆129 · updated Jul 11, 2023)
- Reorder-based post-training quantization for large language models (☆199 · updated May 17, 2023)
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) (☆88 · updated Jul 28, 2025)
- A tool for model sparsification based on torch.fx (☆13 · updated Jun 3, 2024)
- ☆21 · updated Feb 11, 2022
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2-bit Quantization for KV Cache (☆359 · updated Nov 20, 2025)
- [NeurIPS 2024] The official implementation of "ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification" (☆31 · updated Mar 30, 2025)
- A collection of work on object-compositional modeling with implicit neural representations (☆58 · updated Aug 12, 2023)
- [MLSys 2024] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving (☆335 · updated Jul 2, 2024)
- Code for "High-Capacity Expert Binary Networks" (ICLR 2021) (☆27 · updated Dec 3, 2021)
- [ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. (☆890 · updated Nov 26, 2025)
- ☆11 · updated Sep 7, 2024
- [ACL 2024] A novel QAT with self-distillation framework to enhance ultra-low-bit LLMs. (☆134 · updated May 16, 2024)
- ☆19 · updated Nov 6, 2023
- Code for the NeurIPS 2019 paper "MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization" (☆54 · updated May 8, 2020)
- ☆23 · updated Nov 26, 2024
- Neural Network Quantization with Fractional Bit-widths (☆11 · updated Feb 19, 2021)
- ACL 2023 (☆39 · updated Jun 6, 2023)
- Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference (EMNLP 2024) (☆185 · updated Apr 16, 2024)
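Most of the repositories above center on low-bitwidth weight quantization. As orientation, here is a minimal, generic sketch of per-tensor symmetric uniform quantization in NumPy. It is purely illustrative: it is not the algorithm of QLLM or of any project listed here, and the function names are our own.

```python
import numpy as np

def quantize_symmetric(w, bits=4):
    """Generic per-tensor symmetric uniform quantization (illustrative only;
    not QLLM's or any listed project's actual method)."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for signed 4-bit
    scale = np.abs(w).max() / qmax        # map the largest magnitude to qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Reconstruct approximate floats from integer codes.
    return q.astype(np.float32) * scale

w = np.array([1.0, -0.5, 0.25, 0.0], dtype=np.float32)
q, s = quantize_symmetric(w, bits=4)
w_hat = dequantize(q, s)   # approximates w to within scale/2 per element
```

Real low-bit LLM quantizers (GPTQ/OmniQuant-style and the methods above) go well beyond this sketch, e.g. per-channel or per-group scales, weight reconstruction against calibration data, and fused low-bit kernels.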