GoatWu / AdaLog
[ECCV 2024] AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer
☆18 · Updated 3 months ago
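AdaLog builds on log-domain post-training quantization of ViT activations. As a rough illustration only (not the repository's code), the sketch below shows plain fixed-base log2 quantization of post-softmax attention; AdaLog's contribution is making the logarithm base adaptive, which this sketch omits. The function name `log2_quantize` and the 4-bit setting are illustrative assumptions.

```python
import torch

def log2_quantize(x, n_bits=4, eps=1e-8):
    # Illustrative fixed-base log2 quantizer (NOT AdaLog's adaptive-base method):
    # maps positive activations (e.g. post-softmax values in (0, 1]) to integer
    # exponent codes and returns the dequantized power-of-two approximation.
    qmax = 2 ** n_bits - 1
    codes = torch.clamp(torch.round(-torch.log2(x.clamp(min=eps))), 0, qmax)
    return 2.0 ** (-codes)

attn = torch.softmax(torch.randn(2, 8, 16, 16), dim=-1)  # dummy attention map
attn_q = log2_quantize(attn, n_bits=4)
print((attn - attn_q).abs().mean())  # mean absolute quantization error
```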
Alternatives and similar repositories for AdaLog:
Users interested in AdaLog are comparing it to the repositories listed below.
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric ☆52 · Updated last year
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆32 · Updated last year
- (ICCV 2023) Official implementation of Rectified Straight Through Estimator (ReSTE). ☆28 · Updated 5 months ago
- [CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer ☆64 · Updated 10 months ago
- [ICML 2023] The official implementation of BiBench: Benchmarking and Analyzing Network Binarization ☆55 · Updated last year
- DeiT implementation for Q-ViT ☆25 · Updated 2 years ago
- A repository for Binary General Matrix Multiply (BGEMM) implemented with custom CUDA kernels. Thanks to FP6-LLM for the groundwork! ☆14 · Updated 6 months ago
- The official implementation of the NeurIPS 2022 paper Q-ViT. ☆88 · Updated last year
- LLM Inference with Microscaling Format ☆19 · Updated 4 months ago
- [TMLR] Official PyTorch implementation of the paper "Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precision" ☆41 · Updated 5 months ago
- BinaryViT: Pushing Binary Vision Transformers Towards Convolutional Models ☆32 · Updated last year
- List of papers related to Vision Transformer quantization and hardware acceleration in recent AI conferences and journals. ☆76 · Updated 9 months ago
- The official implementation of the ICML 2023 paper OFQ-ViT ☆30 · Updated last year
- PyTorch implementation of PTQ4DiT (https://arxiv.org/abs/2405.16005) ☆25 · Updated 4 months ago
- ☆21 · Updated last year
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] ☆29 · Updated 2 years ago
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆21 · Updated 11 months ago
- LSQ+ or LSQplus ☆63 · Updated last month
- The official PyTorch implementation of the paper "Towards Accurate Post-training Quantization for Diffusion Models" (CVPR 2024 Poster). ☆34 · Updated 9 months ago
- ☆75 · Updated 2 years ago
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper "Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models" ☆48 · Updated 2 years ago
- The official implementation of BiViT: Extremely Compressed Binary Vision Transformers ☆14 · Updated last year
- The official implementation of the AAAI 2024 paper Bi-ViT. ☆9 · Updated last year
- Super-resolution; post-training quantization; model compression ☆11 · Updated last year
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆33 · Updated this week
- ☆18 · Updated 4 months ago
- ☆18 · Updated 3 years ago
- [ICCV 23] An approach to enhance the efficiency of the Vision Transformer (ViT) by concurrently employing token pruning and token merging techniques ☆93 · Updated last year
- [NeurIPS'24] Efficient and accurate memory-saving method towards W4A4 large multi-modal models. ☆67 · Updated 2 months ago