mit-han-lab / sparsevit
[CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer
☆74 · Updated last year
Alternatives and similar repositories for sparsevit
Users interested in sparsevit are comparing it to the libraries listed below.
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆100 · Updated 2 years ago
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- [NeurIPS 2022] “M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design”, Hanxue… ☆129 · Updated 2 years ago
- [ECCV 2024] AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer ☆31 · Updated 9 months ago
- [CVPR 2023 Highlight] The official implementation of "Stitchable Neural Networks". ☆247 · Updated 2 years ago
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆76 · Updated last year
- ☆12 · Updated last year
- The official PyTorch implementation for the paper: Towards Accurate Post-training Quantization for Diffusion Models. (CVPR24 Poste… ☆37 · Updated last year
- Official implementation of "SViT: Revisiting Token Pruning for Object Detection and Instance Segmentation" ☆34 · Updated last year
- ☆47 · Updated 2 years ago
- Official PyTorch implementation of "Which Tokens to Use? Investigating Token Reduction in Vision Transformers", presented at ICCV 2023 NIVT… ☆34 · Updated 2 years ago
- ☆35 · Updated 2 years ago
- ALGM applied to Segmenter ☆29 · Updated last year
- [ICLR 2024] The official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆39 · Updated last year
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for Diffusion Models. ☆23 · Updated last year
- ImageNet-1K data download and processing for use as a dataset ☆109 · Updated 2 years ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆31 · Updated last year
- The codebase for the paper "PPT: Token Pruning and Pooling for Efficient Vision Transformer" ☆26 · Updated 10 months ago
- ☆53 · Updated last year
- [BMVC 2024] PlainMamba: Improving Non-hierarchical Mamba in Visual Recognition ☆80 · Updated 5 months ago
- [TMLR] Official PyTorch implementation of the paper "Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precisio… ☆45 · Updated 11 months ago
- The official implementation of the NeurIPS 2022 paper Q-ViT. ☆96 · Updated 2 years ago
- Recent Advances on Efficient Vision Transformers ☆53 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- The official implementation of the AAAI 2024 paper Bi-ViT. ☆10 · Updated last year
- (ICLR 2025) BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models ☆24 · Updated 11 months ago
- [NeurIPS 2022 Spotlight] The official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆74 · Updated 2 years ago
- 1.5−3.0× lossless training or pre-training speedup. An off-the-shelf, easy-to-implement algorithm for the efficient training of foundatio… ☆222 · Updated last year
- Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022) ☆161 · Updated 3 years ago
- Python code for the ICLR 2022 spotlight paper "EViT: Expediting Vision Transformers via Token Reorganizations" ☆191 · Updated 2 years ago