wimh966 / outlier_suppression
The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models
☆49 · Updated Oct 5, 2022
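The paper's central trick is gamma migration: the LayerNorm output γ ⊙ x̂ + β is rewritten as γ ⊙ (x̂ + β⊘γ), so the tensor that gets quantized no longer carries the outlier-amplifying γ, which is absorbed into the following linear layer instead. The sketch below illustrates that idea on a LayerNorm → Linear pair; it is a minimal, hypothetical helper for intuition, not the repository's actual API.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def migrate_gamma(ln: nn.LayerNorm, fc: nn.Linear):
    """Fold LayerNorm's gamma into the following Linear (gamma-migration sketch).

    Hypothetical helper, not the repo's API. Assumes ln feeds directly into fc
    and that ln.weight has no zero entries (we divide by it).
    """
    gamma = ln.weight.data.clone()
    beta = ln.bias.data.clone()

    # LayerNorm now outputs x_hat + beta / gamma (its scale is fixed to 1),
    # which is the outlier-suppressed tensor a quantizer would see.
    ln.weight.data.fill_(1.0)
    ln.bias.data.copy_(beta / gamma)

    # Absorb gamma into the Linear: W @ (gamma * z) == (W * gamma) @ z,
    # i.e. scale each input column of W; the bias is unchanged.
    fc.weight.data.mul_(gamma)

# Sanity check: the migrated pair is functionally equivalent to the original.
ln, fc = nn.LayerNorm(8), nn.Linear(8, 4)
x = torch.randn(2, 8)
ref = fc(ln(x))
migrate_gamma(ln, fc)
assert torch.allclose(fc(ln(x)), ref, atol=1e-5)
```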
Alternatives and similar repositories for outlier_suppression
Users interested in outlier_suppression are comparing it to the libraries listed below.
- Official implementation of the EMNLP 2023 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… ☆50 · Updated Oct 21, 2023
- DeiT implementation for Q-ViT ☆25 · Updated Apr 21, 2025
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric ☆60 · Updated Mar 23, 2023
- PyTorch implementation of BRECQ, ICLR 2021 ☆289 · Updated Aug 1, 2021
- The official PyTorch implementation of the ICLR 2022 paper, QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quan… ☆127 · Updated Sep 23, 2025
- ☆169 · Updated Mar 9, 2023
- The official implementation of the ICML 2023 paper OFQ-ViT ☆38 · Updated Oct 3, 2023
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆23 · Updated Mar 29, 2024
- Quantization in the Jagged Loss Landscape of Vision Transformers ☆13 · Updated Oct 22, 2023
- An open-sourced PyTorch library for developing energy-efficient, multiplication-less models and applications. ☆14 · Updated Feb 3, 2025
- Code for the NeurIPS 2019 paper "MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization" ☆54 · Updated May 8, 2020
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆69 · Updated Mar 7, 2024
- PyTorch implementation of the NeurIPS 2022 paper "Learning Best Combination for Efficient N:M Sparsity" ☆22 · Updated Jan 13, 2023
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer ☆360 · Updated Apr 11, 2023
- This repository contains integer operators on GPUs for PyTorch. ☆237 · Updated Sep 29, 2023
- [ACL 2024] A novel QAT with self-distillation framework to enhance ultra-low-bit LLMs. ☆134 · Updated May 16, 2024
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆396 · Updated Feb 24, 2024
- ☆17 · Updated Oct 25, 2022
- Reorder-based post-training quantization for large language models ☆198 · Updated May 17, 2023
- PyTorch implementation of the ECCV 2022 paper "Fine-grained Data Distribution Alignment for Post-Training Quantization" ☆15 · Updated Sep 13, 2022
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆180 · Updated Oct 3, 2024
- Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation, CVPR 2022. ☆138 · Updated Apr 28, 2022
- ☆113 · Updated Nov 17, 2023
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization ☆265 · Updated Jan 29, 2023
- ☆36 · Updated Mar 29, 2023
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated Jul 21, 2023
- Post-Training Quantization for Vision Transformers. ☆238 · Updated Jul 19, 2022
- Unofficial implementation of LSQ-Net, a neural network quantization framework ☆310 · Updated May 8, 2024
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆127 · Updated Jun 27, 2023
- The official PyTorch implementation of "Sharpness-aware Quantization for Deep Neural Networks". ☆44 · Updated Nov 25, 2021
- PyTorch implementation of the CVPR 2022 paper IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Sh… ☆36 · Updated Mar 2, 2022
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference of large language models. ☆482 · Updated Nov 26, 2024
- ☆57 · Updated Dec 8, 2020
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,607 · Updated Jul 12, 2024
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆220 · Updated Dec 15, 2023
- PyTorch implementation of TQT. ☆21 · Updated Dec 17, 2021
- AFPQ code implementation ☆23 · Updated Nov 6, 2023
- Code for the AAAI 2019 paper "Deep Neural Network Quantization via Layer-Wise Optimization using Limited Training Data" ☆41 · Updated Jan 22, 2019
- PyTorch implementation of the ICML 2023 paper "Bi-directional Masks for Efficient N:M Sparse Training" ☆12 · Updated Jun 7, 2023