The official implementation of the NeurIPS 2022 paper Q-ViT.
☆105 · Updated May 22, 2023
Alternatives and similar repositories for Q-ViT
Users interested in Q-ViT compare it to the libraries listed below.
- Post-Training Quantization for Vision Transformers ☆238 · Updated Jul 19, 2022
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer ☆359 · Updated Apr 11, 2023
- The official implementation of the ICML 2023 paper OFQ-ViT ☆39 · Updated Oct 3, 2023
- DeiT implementation for Q-ViT ☆25 · Updated Apr 21, 2025
- [ECCV 2022] Patch Similarity Aware Data-Free Quantization for Vision Transformers ☆123 · Updated Dec 22, 2022
- [ICCV 2023] RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers ☆140 · Updated Jan 10, 2024
- ☆11 · Updated Jan 10, 2025
- [ICCV 2023] I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference ☆200 · Updated Sep 2, 2024
- [ECCV 2024] SparseRefine: Sparse Refinement for Efficient High-Resolution Semantic Segmentation ☆14 · Updated Jan 10, 2025
- [ICML 2023] Official implementation of BiBench: Benchmarking and Analyzing Network Binar… ☆56 · Updated Mar 4, 2024
- ☆36 · Updated Sep 3, 2023
- Binary General Matrix Multiply (BGEMM) with custom CUDA kernels; built on FP6-LLM ☆18 · Updated Aug 30, 2024
- Official implementation of the ECCV 2022 paper LIMPQ: "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" ☆61 · Updated Mar 19, 2023
- Official implementation of Generative Low-bitwidth Data-Free Quantization (GDFQ) ☆55 · Updated Jul 23, 2023
- Revisiting Parameter Sharing for Automatic Neural Channel Number Search, NeurIPS 2020 ☆22 · Updated Nov 15, 2020
- PyTorch implementation of BRECQ, ICLR 2021 ☆290 · Updated Aug 1, 2021
- ☆20 · Updated Nov 23, 2022
- Unofficial PyTorch implementation of Learned Step Size Quantization (LSQ), ICLR 2020 ☆139 · Updated Nov 19, 2020
- ☆14 · Updated Oct 24, 2022
- MINT: Multiplier-less INTeger Quantization for Energy-Efficient Spiking Neural Networks, ASP-DAC 2024 (nominated for Best Paper Award) ☆16 · Updated Apr 12, 2024
- Code for the AAAI 2024 oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model…" ☆69 · Updated Mar 7, 2024
- Model Quantization Benchmark ☆18 · Updated Sep 30, 2025
- ☆30 · Updated Jul 22, 2024
- [ICCV 2021] Code release for "Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks" ☆32 · Updated Jul 24, 2022
- ☆25 · Updated Dec 11, 2021
- LLM Inference with Microscaling Format ☆34 · Updated Nov 12, 2024
- [TMLR] Official PyTorch implementation of "Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precisio…" ☆48 · Updated Sep 27, 2024
- ☆12 · Updated Nov 17, 2023
- High-performance FP8 GEMM kernels for SM89 and later GPUs ☆20 · Updated Jan 24, 2025
- [TMLR] Official PyTorch implementation of "Efficient Quantization-aware Training with Adaptive Coreset Selection" ☆37 · Updated Aug 20, 2024
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization ☆265 · Updated Jan 29, 2023
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Updated Dec 1, 2023
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆54 · Updated Mar 27, 2024
- [ICCV 2023] EMQ: Evolving Training-free Proxies for Automated Mixed-Precision Quantization ☆28 · Updated Dec 6, 2023
- [COLM 2025] Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" ☆71 · Updated Jul 8, 2025
- Official PyTorch implementation of the NeurIPS 2022 spotlight paper "Outlier Suppression: Pushing the Limit of Low-bit Transformer L…" ☆49 · Updated Oct 5, 2022
- QT-DoG: Quantization-Aware Training for Domain Generalization ☆23 · Updated Nov 30, 2025
- Unofficial implementation of LSQ-Net, a neural network quantization framework ☆309 · Updated May 8, 2024
- [TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA ☆17 · Updated Jul 7, 2022