tobna / WhatTransformerToFavor
GitHub repository for the paper "Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers".
☆33 · Updated 10 months ago
Alternatives and similar repositories for WhatTransformerToFavor
Users interested in WhatTransformerToFavor are comparing it to the libraries listed below.
- (AAAI 2023 Oral) PyTorch implementation of "CF-ViT: A General Coarse-to-Fine Method for Vision Transformer"☆106 · Updated 2 years ago
- Training ImageNet / CIFAR models with SOTA strategies and fancy techniques such as ViT, KD, Rep, etc.☆87 · Updated last year
- ResMLP: Feedforward networks for image classification with data-efficient training☆45 · Updated 4 years ago
- The official implementation of [NeurIPS 2024] Wasserstein Distance Rivals Kullback-Leibler Divergence for Knowledge Distillation https://ar…☆49 · Updated last year
- A Close Look at Spatial Modeling: From Attention to Convolution☆92 · Updated 3 years ago
- [ECCV 2022] Implementation of the paper "Locality Guidance for Improving Vision Transformers on Tiny Datasets"☆82 · Updated 3 years ago
- ☆48 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- Official PyTorch implementation of Super Vision Transformer (IJCV)☆43 · Updated 2 years ago
- [ECCV 2022] EdgeViT: Competing Light-weight CNNs on Mobile Devices with Vision Transformers☆115 · Updated 2 years ago
- [ICML 2024] Official PyTorch implementation of "SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-paramete…☆108 · Updated last year
- ☆23 · Updated 2 years ago
- [AAAI 2022] This is the official PyTorch implementation of "Less is More: Pay Less Attention in Vision Transformers"☆97 · Updated 3 years ago
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference☆30 · Updated last year
- Official PyTorch implementation of our ECCV 2022 paper "Sliced Recursive Transformer"☆66 · Updated 3 years ago
- ImageNet-1K data download and processing for use as a dataset☆125 · Updated 3 years ago
- [ECCV 2022] AMixer: Adaptive Weight Mixing for Self-attention Free Vision Transformers☆29 · Updated 3 years ago
- Official implementation of the paper "Knowledge Distillation from A Stronger Teacher", NeurIPS 2022☆155 · Updated 3 years ago
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer☆74 · Updated 3 years ago
- ☆40 · Updated 2 years ago
- [CVPR 2023 Highlight] This is the official implementation of "Stitchable Neural Networks".☆250 · Updated 2 years ago
- Code for "You Only Cut Once: Boosting Data Augmentation with a Single Cut", ICML 2022☆106 · Updated 2 years ago
- EATFormer: Improving Vision Transformer Inspired by Evolutionary Algorithm☆35 · Updated 3 years ago
- PyTorch code and checkpoints release for VanillaKD: https://arxiv.org/abs/2305.15781☆76 · Updated 2 years ago
- [ICCV 2023] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech…☆105 · Updated 2 years ago
- This is an official implementation of our NeurIPS 2022 paper "Bridging the Gap Between Vision Transformers and Convolutional Neural Netwo…☆63 · Updated 5 months ago
- Unified Architecture Search with Convolution, Transformer, and MLP (ECCV 2022)☆53 · Updated 3 years ago
- PyTorch implementation of our paper accepted by ECCV 2022 -- Knowledge Condensation Distillation https://arxiv.org/abs/2207.05409☆30 · Updated 3 years ago
- ☆63 · Updated 4 years ago
- ☆43 · Updated 2 years ago