NVlabs / A-ViT
Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022)
☆162 · Updated 3 years ago
Alternatives and similar repositories for A-ViT
Users interested in A-ViT are comparing it to the repositories listed below.
- Python code for the ICLR 2022 spotlight paper "EViT: Expediting Vision Transformers via Token Reorganizations" ☆192 · Updated 2 years ago
- Adaptive Token Sampling for Efficient Vision Transformers (ECCV 2022 Oral Presentation) ☆104 · Updated last year
- [CVPR 2022] This is the official implementation of the paper "AdaViT: Adaptive Vision Transformers for Efficient Image Recognition". ☆55 · Updated 3 years ago
- [CVPR 2023] This repository includes the official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers". ☆107 · Updated 2 years ago
- [ECCV 2022] Bootstrapped Masked Autoencoders for Vision BERT Pretraining ☆97 · Updated 2 years ago
- Official code for "Uniform Masking: Enabling MAE Pre-training for Pyramid-based Vision Transformers with Locality" ☆244 · Updated 2 years ago
- ☆262 · Updated 2 years ago
- PyTorch implementation of the paper "MILAN: Masked Image Pretraining on Language Assisted Representation" https://arxiv.org/pdf/2208.0604… ☆83 · Updated 3 years ago
- TokenMix: Rethinking Image Mixing for Data Augmentation in Vision Transformers (ECCV 2022) ☆94 · Updated 3 years ago
- [ICCV 2023] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆101 · Updated 2 years ago
- [CVPR 2023 Highlight] This is the official implementation of "Stitchable Neural Networks". ☆248 · Updated 2 years ago
- A PyTorch implementation of "Context AutoEncoder for Self-Supervised Representation Learning" ☆198 · Updated 2 years ago
- [ICLR 2024] Exploring Target Representations for Masked Autoencoders ☆57 · Updated last year
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer ☆73 · Updated 3 years ago
- A PyTorch implementation of Mugs, proposed in our paper "Mugs: A Multi-Granular Self-Supervised Learning Framework". ☆83 · Updated last year
- LoMaR (Efficient Self-supervised Vision Pretraining with Local Masked Reconstruction) ☆65 · Updated 6 months ago
- MixMIM: Mixed and Masked Image Modeling for Efficient Visual Representation Learning ☆145 · Updated 2 years ago
- Official implementation of the paper "Vision Transformer with Progressive Sampling", ICCV 2021. ☆152 · Updated 3 years ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆221 · Updated 2 years ago
- Accelerating T2T-ViT by 1.6-3.6x. ☆255 · Updated 3 years ago
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification ☆628 · Updated 2 years ago
- [NeurIPS 2022] This is an official implementation of "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning". ☆186 · Updated last year
- [NeurIPS 2022] Implementation of "AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition" ☆370 · Updated 3 years ago
- FastMIM, official PyTorch implementation of our paper "FastMIM: Expediting Masked Image Modeling Pre-training for Vision" (https://arxiv.o… ☆39 · Updated 2 years ago
- [NeurIPS 2022] "M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design", Hanxue … ☆131 · Updated 2 years ago
- [CVPR 2022] This repository includes the official project for the paper "TransMix: Attend to Mix for Vision Transformers". ☆156 · Updated 2 years ago
- ☆62 · Updated 2 years ago
- PyTorch implementation of R-MAE https://arxiv.org/abs/2306.05411 ☆114 · Updated 2 years ago
- Official code for ConMIM (ICLR 2023) ☆58 · Updated 2 years ago
- Official code for our CVPR 2022 paper "Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space" ☆251 · Updated last month