DD-DuDa / awesome-vit-quantization-acceleration
List of papers related to Vision Transformer quantization and hardware acceleration in recent AI conferences and journals.
☆73 · Updated 8 months ago
Alternatives and similar repositories for awesome-vit-quantization-acceleration:
Users interested in awesome-vit-quantization-acceleration are comparing it to the libraries listed below.
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆102 · Updated last year
- ☆90 · Updated last year
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric ☆52 · Updated last year
- ☆21 · Updated 11 months ago
- LSQ+ or LSQplus (see the learned-step-size quantizer sketch after this list) ☆63 · Updated 2 weeks ago
- DeiT implementation for Q-ViT ☆24 · Updated 2 years ago
- Post-Training Quantization for Vision Transformers ☆204 · Updated 2 years ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆82 · Updated 5 months ago
- The official implementation of the NeurIPS 2022 paper Q-ViT ☆86 · Updated last year
- Official implementation of the EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" ☆19 · Updated last year
- The official PyTorch implementation of the ICLR 2022 paper, QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quan… ☆115 · Updated last year
- EQ-Net [ICCV 2023] ☆27 · Updated last year
- ☆21 · Updated this week
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… ☆48 · Updated 2 years ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆13 · Updated 7 months ago
- ☆43 · Updated 3 years ago
- A co-design architecture on sparse attention ☆51 · Updated 3 years ago
- ☆75 · Updated 2 years ago
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers ☆34 · Updated last year
- LLM Inference with Microscaling Format (see the block-scaling sketch after this list) ☆19 · Updated 3 months ago
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts ☆103 · Updated 9 months ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆57 · Updated 11 months ago
- ☆51 · Updated 10 months ago
- Code Repository of Evaluating Quantized Large Language Models ☆116 · Updated 5 months ago
- ☆136 · Updated last year
- A collection of research papers on efficient training of DNNs ☆70 · Updated 2 years ago
- Torch2Chip (MLSys, 2024) ☆51 · Updated 2 weeks ago
- [ICML 2023] This project is the official implementation of our accepted ICML 2023 paper BiBench: Benchmarking and Analyzing Network Binar… ☆54 · Updated 11 months ago
- ViTALiTy (HPCA'23) Code Repository ☆21 · Updated last year
- An FPGA Accelerator for Transformer Inference ☆76 · Updated 2 years ago
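
For readers skimming the quantization entries above (e.g. the LSQ+/LSQplus repository), here is a minimal PyTorch sketch of a learned-step-size fake quantizer. It is an illustrative reconstruction, not code from any listed repository; the class name, parameter names, and defaults are assumptions.

```python
import torch
import torch.nn as nn


class LSQPlusFakeQuant(nn.Module):
    """Minimal sketch of LSQ+-style fake quantization with a learnable scale and offset.

    Hypothetical names and defaults, for illustration only.
    """

    def __init__(self, num_bits: int = 4, symmetric: bool = False):
        super().__init__()
        self.qn = -(2 ** (num_bits - 1))                # lowest integer level, e.g. -8 for 4 bits
        self.qp = 2 ** (num_bits - 1) - 1               # highest integer level, e.g. +7 for 4 bits
        self.scale = nn.Parameter(torch.tensor(1.0))    # learnable step size s
        self.offset = nn.Parameter(torch.tensor(0.0))   # learnable offset beta (the "+" in LSQ+)
        self.symmetric = symmetric

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        beta = self.offset * 0.0 if self.symmetric else self.offset
        # LSQ-style gradient scaling for the step size: g = 1 / sqrt(numel * Qp)
        g = 1.0 / float(x.numel() * self.qp) ** 0.5
        s = self.scale * g + (self.scale - self.scale * g).detach()  # value = scale, grad scaled by g
        q = torch.clamp((x - beta) / s, self.qn, self.qp)
        q = q + (q.round() - q).detach()                # straight-through estimator for round()
        return q * s + beta                             # dequantize back to floating point


# Example: fake-quantize a random activation tensor to 4 bits during QAT.
quantizer = LSQPlusFakeQuant(num_bits=4)
out = quantizer(torch.randn(8, 16))
```

The sketch only covers the simulated (fake-quantized) training path; in a real deployment the learned scale and offset would be folded into integer kernels.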
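Similarly, for the microscaling-format entry above, here is a hedged sketch of block-wise quantization with one shared power-of-two scale per block (an MXINT8-like layout). The function name, block size, and rounding choices are assumptions, not the listed repository's code.

```python
import torch


def mx_style_quant_dequant(x: torch.Tensor, block_size: int = 32) -> torch.Tensor:
    """Illustrative microscaling-style quantize/dequantize round trip.

    Each contiguous block of `block_size` values shares one power-of-two scale,
    and element values are rounded to signed 8-bit integers.
    """
    assert x.numel() % block_size == 0, "sketch assumes the tensor divides evenly into blocks"
    orig_shape = x.shape
    blocks = x.reshape(-1, block_size)
    # Shared exponent per block: pick 2**e so the block maximum fits in the int8 range.
    max_abs = blocks.abs().amax(dim=1, keepdim=True).clamp(min=1e-12)
    shared_exp = torch.ceil(torch.log2(max_abs / 127.0))
    scale = torch.pow(torch.tensor(2.0), shared_exp)
    q = torch.clamp(torch.round(blocks / scale), -127, 127)  # 8-bit element values
    return (q * scale).reshape(orig_shape)


# Example: round-trip a weight tensor through the block format.
w = torch.randn(64, 64)
w_hat = mx_style_quant_dequant(w, block_size=32)
```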