mit-han-lab / sparsevit
[CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer
☆69 · Updated last year
Alternatives and similar repositories for sparsevit
Users interested in sparsevit are comparing it to the repositories listed below.
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆96 · Updated last year
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- ALGM applied to Segmenter ☆24 · Updated last year
- [NeurIPS 2022] “M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design”, Hanxue … ☆119 · Updated 2 years ago
- Official PyTorch implementation of Which Tokens to Use? Investigating Token Reduction in Vision Transformers, presented at ICCV 2023 NIVT … ☆35 · Updated last year
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆68 · Updated 10 months ago
- This is the official PyTorch implementation for the paper: Towards Accurate Post-training Quantization for Diffusion Models. (CVPR24 Poste… ☆36 · Updated last year
- Official implementation of "SViT: Revisiting Token Pruning for Object Detection and Instance Segmentation" ☆32 · Updated last year
- The codebase for the paper "PPT: Token Pruning and Pooling for Efficient Vision Transformer" ☆23 · Updated 6 months ago
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆44 · Updated 3 months ago
- [BMVC 2024] PlainMamba: Improving Non-hierarchical Mamba in Visual Recognition ☆77 · Updated last month
- This is the official PyTorch implementation for the paper: *Quantformer: Learning Extremely Low-precision Vision Transformers* ☆23 · Updated 2 years ago
- [ECCV 2024] AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer ☆29 · Updated 5 months ago
- [NeurIPS'24] Efficient and accurate memory-saving method towards W4A4 large multi-modal models ☆73 · Updated 5 months ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆32 · Updated last year
- Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022) ☆156 · Updated 2 years ago
- [ICLR 2025] This repository is the official implementation of Autoregressive Pretraining with Mamba in Vision ☆77 · Updated this week
- [CVPR'24] Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities ☆99 · Updated last year
- Adaptive Token Sampling for Efficient Vision Transformers (ECCV 2022 Oral Presentation) ☆101 · Updated last year
- [CVPR 2022] This is the official implementation of the paper "AdaViT: Adaptive Vision Transformers for Efficient Image Recognition" ☆52 · Updated 2 years ago
- Learnable Semi-structured Sparsity for Vision Transformers and Diffusion Transformers ☆11 · Updated 3 months ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆38 · Updated last year
- [NeurIPS 2024] The official implementation of "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" ☆46 · Updated 5 months ago
- Python code for the ICLR 2022 spotlight paper "EViT: Expediting Vision Transformers via Token Reorganizations" ☆185 · Updated last year
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models ☆22 · Updated last year
- ImageNet-1K data download and processing for use as a dataset ☆99 · Updated 2 years ago