[ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models"
☆39 · Mar 11, 2024 · Updated last year
Alternatives and similar repositories for QLLM
Users that are interested in QLLM are comparing it to the libraries listed below
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for Diffusion Models. ☆23 · Mar 29, 2024 · Updated last year
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod…" ☆31 · Mar 12, 2024 · Updated last year
- Official code for "Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM" ☆14 · Dec 27, 2023 · Updated 2 years ago
- Structured Binary Neural Networks for Image Recognition ☆18 · Nov 18, 2021 · Updated 4 years ago
- Structured Binary Neural Networks for Image Recognition ☆16 · Oct 12, 2022 · Updated 3 years ago
- PyTorch implementation of "Deep Transferring Quantization" (ECCV 2020) ☆18 · Jun 22, 2022 · Updated 3 years ago
- The official implementation of "BiViT: Extremely Compressed Binary Vision Transformers" ☆16 · Jun 18, 2023 · Updated 2 years ago
- Collections of model quantization algorithms. For any issues, please contact Peng Chen (blueardour@gmail.com). ☆73 · Oct 7, 2021 · Updated 4 years ago
- Benchmarking Attention Mechanism in Vision Transformers. ☆20 · Oct 10, 2022 · Updated 3 years ago
- [ICCV 2021] Official implementation of "Scalable Vision Transformers with Hierarchical Pooling" ☆33 · Dec 30, 2021 · Updated 4 years ago
- ☆21 · Feb 5, 2024 · Updated 2 years ago
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models". ☆23 · Mar 16, 2025 · Updated 11 months ago
- [CVPR 2025] APHQ-ViT: Post-Training Quantization with Average Perturbation Hessian Based Reconstruction for Vision Transformers ☆38 · Apr 7, 2025 · Updated 10 months ago
- [ICLR 2025] Official PyTorch implementation of the paper "T-Stitch: Accelerating Sampling in Pre-trained Diffusion Models with Trajectory Stit…" ☆104 · Feb 26, 2024 · Updated 2 years ago
- ☆19 · Nov 6, 2023 · Updated 2 years ago
- This is the official PyTorch implementation for "Sharpness-aware Quantization for Deep Neural Networks". ☆44 · Nov 25, 2021 · Updated 4 years ago
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di…" ☆68 · Jun 4, 2024 · Updated last year
- ☆21 · Feb 11, 2022 · Updated 4 years ago
- Code for "High-Capacity Expert Binary Networks" (ICLR 2021). ☆27 · Dec 3, 2021 · Updated 4 years ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆129 · Jul 11, 2023 · Updated 2 years ago
- ☆13 · Jun 22, 2025 · Updated 8 months ago
- Collections of model quantization algorithms. For any issues, please contact Peng Chen (blueardour@gmail.com). ☆45 · Aug 19, 2021 · Updated 4 years ago
- [NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆73 · Nov 15, 2022 · Updated 3 years ago
- A tool for model sparsification based on torch.fx ☆13 · Jun 3, 2024 · Updated last year
- Reorder-based post-training quantization for large language models ☆199 · May 17, 2023 · Updated 2 years ago
- DNN quantization with outlier channel splitting (ICML'19) ☆113 · Mar 21, 2020 · Updated 5 years ago
- Unofficial implementation of "Scalable-Softmax Is Superior for Attention" ☆20 · May 30, 2025 · Updated 8 months ago
- PyTorch implementation of our TNNLS paper "Pruning Networks with Cross-Layer Ranking & k-Reciprocal Nearest Filters" ☆12 · Feb 24, 2022 · Updated 4 years ago
- An open-sourced PyTorch library for developing energy-efficient, multiplication-less models and applications. ☆14 · Feb 3, 2025 · Updated last year
- Official implementation of the EMNLP 2023 paper "Outlier Suppression+: Accurate quantization of large language models by equivalent and opti…" ☆50 · Oct 21, 2023 · Updated 2 years ago
- The official implementation of "NAS-BNN: Neural Architecture Search for Binary Neural Networks" ☆13 · Aug 30, 2024 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆358 · Nov 20, 2025 · Updated 3 months ago
- [ACL 2024] A novel QAT framework with self-distillation to enhance ultra-low-bit LLMs. ☆134 · May 16, 2024 · Updated last year
- [NeurIPS 2024] The official implementation of "ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification" ☆32 · Mar 30, 2025 · Updated 10 months ago
- ☆73 · Dec 16, 2025 · Updated 2 months ago
- BitSplit post-training quantization ☆50 · Dec 20, 2021 · Updated 4 years ago
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆79 · Mar 17, 2025 · Updated 11 months ago
- Code for the NeurIPS 2019 paper "MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization" ☆54 · May 8, 2020 · Updated 5 years ago
- [NeurIPS'24] Efficient and accurate memory-saving method towards W4A4 large multi-modal models. ☆97 · Jan 3, 2025 · Updated last year