mit-han-lab / sparsevit
[CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer
☆68 · Updated last year
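The core idea in SparseViT is to exploit activation sparsity at the window level: score non-overlapping feature-map windows by activation magnitude and spend attention compute only on the highest-scoring ones. Below is a minimal sketch of that pattern, assuming L2-norm scoring and a fixed keep ratio; the names, shapes, and scoring rule are illustrative and not the repository's actual implementation.

```python
import torch

def prune_windows(x, window_size=7, keep_ratio=0.5):
    """Partition a feature map into windows, score each window by activation
    magnitude, and keep only the highest-scoring ones (illustrative sketch)."""
    B, H, W, C = x.shape
    ws = window_size  # H and W are assumed divisible by ws
    # Non-overlapping window partition: (B, num_windows, ws*ws, C)
    windows = (
        x.view(B, H // ws, ws, W // ws, ws, C)
         .permute(0, 1, 3, 2, 4, 5)
         .reshape(B, -1, ws * ws, C)
    )
    # Score each window by the mean L2 norm of its activations
    # (an illustrative proxy for importance, not the paper's exact rule).
    scores = windows.norm(dim=-1).mean(dim=-1)  # (B, num_windows)
    k = max(1, int(scores.shape[1] * keep_ratio))
    keep_idx = scores.topk(k, dim=1).indices    # (B, k)
    kept = torch.gather(
        windows, 1,
        keep_idx[..., None, None].expand(-1, -1, ws * ws, C),
    )
    return kept, keep_idx  # attention would then run on the kept windows only

# Example: keep half of the 7x7 windows of a 56x56x96 feature map.
feats = torch.randn(2, 56, 56, 96)
kept, idx = prune_windows(feats)
print(kept.shape)  # torch.Size([2, 32, 49, 96])
```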
Alternatives and similar repositories for sparsevit
Users interested in sparsevit are comparing it to the repositories listed below.
- [ICCV 2023] An approach to enhance the efficiency of the Vision Transformer (ViT) by concurrently employing token pruning and token merging techniques (a minimal sketch of this pattern follows the list). ☆95 · Updated last year
- ALGM applied to Segmenter ☆24 · Updated 11 months ago
- Official PyTorch implementation of "Which Tokens to Use? Investigating Token Reduction in Vision Transformers", presented at the ICCV 2023 NIVT Workshop. ☆35 · Updated last year
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- [ECCV 2024] AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer ☆27 · Updated 5 months ago
- Official implementation of "SViT: Revisiting Token Pruning for Object Detection and Instance Segmentation" ☆32 · Updated last year
- [BMVC 2024] PlainMamba: Improving Non-hierarchical Mamba in Visual Recognition ☆77 · Updated last month
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆68 · Updated 9 months ago
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆43 · Updated 3 months ago
- [NeurIPS 2022] “M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design”, Hanxue … ☆116 · Updated 2 years ago
- Official implementation of "Autoregressive Pretraining with Mamba in Vision" ☆77 · Updated 10 months ago
- The codebase for the paper "PPT: Token Pruning and Pooling for Efficient Vision Transformer" ☆23 · Updated 5 months ago
- Official PyTorch implementation of "Towards Accurate Post-training Quantization for Diffusion Models" (CVPR 2024 poster). ☆35 · Updated 11 months ago
- Official repository of InLine attention (NeurIPS 2024) ☆46 · Updated 4 months ago
- Official PyTorch implementation of "Quantformer: Learning Extremely Low-precision Vision Transformers" ☆23 · Updated 2 years ago
- [NeurIPS'23] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆60 · Updated last year
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆32 · Updated last year
- [CVPR'24] Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities ☆99 · Updated last year
- [CVPR 2022] Official implementation of the paper "AdaViT: Adaptive Vision Transformers for Efficient Image Recognition" ☆52 · Updated 2 years ago
- [NeurIPS 2024 Spotlight] The official implementation of MambaTree: Tree Topology is All You Need in State Space Model ☆91 · Updated 11 months ago
- The official implementation of the AAAI 2024 paper "Bi-ViT" ☆10 · Updated last year
- 1.5–3.0× lossless training or pre-training speedup: an off-the-shelf, easy-to-implement algorithm for the efficient training of foundation models. ☆221 · Updated 8 months ago
- [CVPR 2024] PTQ4SAM: Post-Training Quantization for Segment Anything ☆73 · Updated 10 months ago
- An efficient PyTorch implementation of selective scan in one file; works on both CPU and GPU, with the corresponding mathematical derivations. ☆86 · Updated last year
- [ECCV 2024 Workshop Best Paper Award] Famba-V: Fast Vision Mamba with Cross-Layer Token Fusion ☆33 · Updated 7 months ago
- [NeurIPS'24] An efficient and accurate memory-saving method for W4A4 large multi-modal models. ☆73 · Updated 4 months ago
- The official implementation of the NeurIPS 2022 paper "Q-ViT" ☆88 · Updated last year
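Several of the entries above (token pruning, pooling, and merging) share one pattern: rank tokens by an importance score between transformer blocks and drop the rest. Here is the minimal sketch referenced in the first list item; using [CLS] attention as the score and a fixed keep_ratio are illustrative assumptions, not any specific repository's API.

```python
import torch

def prune_tokens(tokens, cls_attn, keep_ratio=0.7):
    """Keep the patch tokens that receive the most [CLS] attention.

    tokens:   (B, N, C) patch tokens (excluding [CLS] itself).
    cls_attn: (B, N) attention from [CLS] to each patch token,
              e.g. averaged over heads; both inputs are illustrative.
    """
    B, N, C = tokens.shape
    k = max(1, int(N * keep_ratio))
    keep_idx = cls_attn.topk(k, dim=1).indices  # indices of the top-k tokens
    kept = torch.gather(tokens, 1, keep_idx[..., None].expand(-1, -1, C))
    return kept, keep_idx

# Example: drop 30% of 196 tokens between transformer blocks.
toks = torch.randn(2, 196, 384)
attn = torch.rand(2, 196)
kept, idx = prune_tokens(toks, attn)
print(kept.shape)  # torch.Size([2, 137, 384])
```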