GATECH-EIC / Castling-ViT
[CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention at Vision Transformer Inference
☆30 · Updated last year
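For context, the "linear attention" named in the title replaces softmax attention with a kernel feature map, so the output can be computed in O(n) rather than O(n²) in sequence length. Below is a minimal NumPy sketch of generic kernelized linear attention; it is not the paper's angular formulation, and the ReLU-based feature map is an illustrative assumption:

```python
import numpy as np

def linear_attention(Q, K, V):
    """Kernelized linear attention: softmax(QK^T)V is approximated by
    phi(Q) (phi(K)^T V) / (phi(Q) sum_j phi(K_j)), with phi >= 0."""
    phi = lambda x: np.maximum(x, 0.0) + 1e-6  # non-negative feature map (assumed)
    Qp, Kp = phi(Q), phi(K)
    # Associativity trick: compute (K'^T V) first -> O(n d^2), not O(n^2 d).
    KV = Kp.T @ V                    # (d, d_v)
    Z = Qp @ Kp.sum(axis=0)          # (n,) per-query normalizer
    return (Qp @ KV) / Z[:, None]    # (n, d_v)

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = rng.normal(size=(3, n, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (8, 4)
```

The key design choice is reassociating the matrix product: because the softmax is gone, `(Q'K'^T)V` can be computed as `Q'(K'^T V)`, which is what makes the cost linear in `n`.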
Alternatives and similar repositories for Castling-ViT
Users interested in Castling-ViT are comparing it to the repositories listed below:
- [CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer ☆72 · Updated last year
- [BMVC 2024] PlainMamba: Improving Non-hierarchical Mamba in Visual Recognition ☆78 · Updated 3 months ago
- [ICCV 2023] An approach to enhance the efficiency of Vision Transformers (ViT) by concurrently employing token pruning and token merging techniques ☆99 · Updated 2 years ago
- Official PyTorch implementation of "Which Tokens to Use? Investigating Token Reduction in Vision Transformers", presented at the ICCV 2023 NIVT … ☆34 · Updated last year
- [NeurIPS 2024] Official repository of InLine attention ☆49 · Updated 6 months ago
- [WACV 2025 Oral] DeepMIM: Deep Supervision for Masked Image Modeling ☆53 · Updated 2 months ago
- [NeurIPS 2024] Official code release for the paper "Revisiting the Integration of Convolution and Attention for Vision Backbone" ☆40 · Updated 5 months ago
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆70 · Updated 11 months ago
- FFNet: MetaMixer-based Efficient Convolutional Mixer Design ☆28 · Updated 4 months ago
- GIFT: Generative Interpretable Fine-Tuning ☆20 · Updated 9 months ago
- The codebase for the paper "PPT: Token Pruning and Pooling for Efficient Vision Transformer" ☆24 · Updated 7 months ago
- [ICLR 2025] Official implementation of "Autoregressive Pretraining with Mamba in Vision" ☆82 · Updated last month
- [NeurIPS 2022 Spotlight] Official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆72 · Updated 2 years ago
- [ECCV 2022] AMixer: Adaptive Weight Mixing for Self-attention Free Vision Transformers ☆28 · Updated 2 years ago
- [CVPR 2025] Breaking the Low-Rank Dilemma of Linear Attention ☆25 · Updated 4 months ago
- [NeurIPS 2024] The official implementation of "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" ☆46 · Updated 6 months ago
- [ICML 2024] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers ☆33 · Updated 6 months ago
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆32 · Updated 9 months ago
- Adapting LLaMA Decoder to Vision Transformer ☆28 · Updated last year
- [CVPR 2025] Official PyTorch implementation of MaskSub, "Masking meets Supervision: A Strong Learning Alliance" ☆45 · Updated 3 months ago
- [CVPR'24] Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities ☆99 · Updated last year
- [ECCV 2024] Official implementation of "Stitched ViTs are Flexible Vision Backbones" ☆27 · Updated last year
- [NeurIPS'23] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆61 · Updated last year
- [CVPR 2024] Official implementation of "Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers" ☆39 · Updated last year
- [NeurIPS 2023] Official implementation of "Knowledge Diffusion for Distillation" ☆88 · Updated last year
- [CVPR 2024] PELA: Learning Parameter-Efficient Models with Low-Rank Approximation ☆18 · Updated last year
- A training-free approach to accelerate ViTs and VLMs by pruning redundant tokens based on similarity ☆29 · Updated last month