enyac-group / evol-q
Quantization in the Jagged Loss Landscape of Vision Transformers
Related projects
Alternatives and complementary repositories for evol-q
- DeiT implementation for Q-ViT
- [TMLR] Official PyTorch implementation of the paper "Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precision"
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric (the uniform affine primitive such PTQ methods calibrate is sketched after this list)
- Neural Network Quantization With Fractional Bit-widths
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021)
- PyTorch implementation of the ICCV 2021 paper ReCU: Reviving the Dead Weights in Binary Neural Networks
- The official implementation of the ICML 2023 paper OFQ-ViT
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models
- Official Implementation of "Genie: Show Me the Data for Quantization" (CVPR 2023)
- LSQ+ (LSQplus) implementation (see the learned-step-size sketch after this list)
- A PyTorch framework for efficient pruning and quantization targeting specialized accelerators
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer
- Official implementation of the EMNLP 2023 paper Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection"
- EQ-Net [ICCV 2023]
- BitPack is a practical tool for efficiently saving ultra-low-precision/mixed-precision quantized models.
- BitSplit post-training quantization
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization
- Post-training sparsity-aware quantization
- Official implementation of the CVPR 2023 paper NoisyQuant: Noisy Bias-Enhanced Post-Training Activation Quantization for Vision Transformers
- A collection of research papers on efficient training of DNNs
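
Many of the post-training quantization (PTQ) projects above (e.g. PD-Quant, NoisyQuant, Genie, BitSplit) calibrate the same underlying primitive: uniform affine quantization, which maps a float tensor onto a grid of 2^n integer levels and back. The snippet below is a minimal PyTorch sketch of that primitive; the function name `fake_quantize` and the naive min/max calibration rule are illustrative assumptions, not code from any of the linked repositories.

```python
import torch

def fake_quantize(x: torch.Tensor, n_bits: int = 8) -> torch.Tensor:
    """Uniform affine (asymmetric) fake quantization of a tensor.

    Illustrative sketch: scale/zero-point come from the observed
    min/max range; the PTQ methods above replace this naive
    calibration with smarter objectives (e.g. prediction-difference
    metrics in PD-Quant).
    """
    qmin, qmax = 0, 2 ** n_bits - 1
    x_min, x_max = x.min(), x.max()
    # Scale and zero-point derived from the observed value range.
    scale = (x_max - x_min).clamp(min=1e-8) / (qmax - qmin)
    zero_point = torch.round(-x_min / scale)
    # Quantize: snap to the nearest integer level, clamp to the grid.
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    # Dequantize ("fake" quantization keeps the tensor in float).
    return (q - zero_point) * scale

x = torch.randn(4, 16)
print((x - fake_quantize(x, n_bits=4)).abs().max())  # round-trip error
```

For in-range values the round-trip error is bounded by half a quantization step, which is why the calibration of `scale` and `zero_point` dominates post-quantization accuracy.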
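
On the quantization-aware-training side, the LSQ+ entry above learns the quantizer parameters themselves by gradient descent. Below is a minimal sketch of that learned-step-size idea, assuming a per-tensor scale and a learnable offset trained through a straight-through estimator; the class name `LearnedStepQuantizer`, the initialization constants, and the gradient-scale choice are hypothetical, not taken from the LSQplus repository.

```python
import torch
import torch.nn as nn

class LearnedStepQuantizer(nn.Module):
    """LSQ+-style quantizer sketch: step size (scale) and offset (beta)
    are trainable parameters; rounding is made differentiable with a
    straight-through estimator. Names are illustrative, not the repo's."""

    def __init__(self, n_bits: int = 4):
        super().__init__()
        self.qmin, self.qmax = 0, 2 ** n_bits - 1
        self.scale = nn.Parameter(torch.tensor(0.1))
        self.beta = nn.Parameter(torch.tensor(0.0))  # learnable offset (the "+" in LSQ+)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gradient scaling as recommended in the LSQ paper stabilizes
        # training of `scale`; forward value of `scale` is unchanged.
        g = 1.0 / (x.numel() * self.qmax) ** 0.5
        scale = self.scale * g + (self.scale - self.scale * g).detach()
        q = torch.clamp((x - self.beta) / scale, self.qmin, self.qmax)
        # Straight-through estimator: round in the forward pass only.
        q = q + (torch.round(q) - q).detach()
        return q * scale + self.beta

quant = LearnedStepQuantizer(n_bits=4)
y = quant(torch.randn(8, 32))
y.sum().backward()  # gradients reach both scale and beta via the STE
```

The design point here is that both the step size and the offset receive gradients through the STE, so the quantizer adapts to the weight/activation distribution during training rather than being fixed by calibration.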