facebookresearch / LeViT
LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
☆612 · Updated 2 years ago
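For quick experimentation, LeViT backbones are also distributed through the `timm` library; the snippet below is a minimal sketch of loading a pretrained variant that way. The `timm.create_model` call and the `levit_128` model name come from `timm`, not from this repository's own loading code, so treat the exact names as assumptions.

```python
# Minimal sketch: instantiating a LeViT backbone via the timm library
# (assumes a timm version that ships the 'levit_128' pretrained weights;
# the official facebookresearch/LeViT repo has its own, possibly different, loader).
import torch
import timm

model = timm.create_model("levit_128", pretrained=True)  # LeViT-128 variant
model.eval()

dummy = torch.randn(1, 3, 224, 224)   # one 224x224 RGB image
with torch.no_grad():
    logits = model(dummy)             # ImageNet-1k class logits
print(logits.shape)                   # expected: torch.Size([1, 1000])
```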
Alternatives and similar repositories for LeViT
Users interested in LeViT are comparing it to the libraries listed below:
- A PyTorch implementation of "MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer" ☆533 · Updated 3 years ago
- Two simple and effective vision transformer designs that are on par with the Swin transformer ☆601 · Updated 2 years ago
- PyTorch implementation of "All Tokens Matter: Token Labeling for Training Better Vision Transformers" ☆427 · Updated last year
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification ☆608 · Updated last year
- RepMLPNet: Hierarchical Vision MLP with Re-parameterized Locality (CVPR 2022) ☆306 · Updated 2 years ago
- This is an official implementation for "ResT: An Efficient Transformer for Visual Recognition". ☆285 · Updated 2 years ago
- Official MegEngine implementation of RepLKNet ☆275 · Updated 3 years ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" ☆556 · Updated 3 years ago
- This is an official implementation of CvT: Introducing Convolutions to Vision Transformers. ☆576 · Updated 2 years ago
- Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs (CVPR 2022) ☆909 · Updated last year
- ☆318 · Updated 3 years ago
- This is an official implementation of CvT: Introducing Convolutions to Vision Transformers. ☆227 · Updated 2 years ago
- EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022] ☆1,049 · Updated last year
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) ☆1,333 · Updated last year
- [ICLR 2023] "More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity"; [ICML 2023] "Are Large Kernels Better Teachers… ☆271 · Updated last year
- (ICCV 2021 Oral) CoaT: Co-Scale Conv-Attentional Image Transformers ☆231 · Updated 3 years ago
- Code for the Convolutional Vision Transformer (ConViT) ☆465 · Updated 3 years ago
- MetaFormer Baselines for Vision (TPAMI 2024) ☆465 · Updated last year
- Bottleneck Transformers for Visual Recognition ☆278 · Updated 4 years ago
- Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones ☆199 · Updated 4 years ago
- [ECCV 2022] Source code of "EdgeFormer: Improving Light-weight ConvNets by Learning from Vision Transformers" ☆353 · Updated 2 years ago
- CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows, CVPR 2022 ☆569 · Updated last year
- Diverse Branch Block: Building a Convolution as an Inception-like Unit ☆337 · Updated 2 years ago
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ☆288 · Updated 3 years ago
- Official PyTorch implementation of Fully Attentional Networks ☆478 · Updated 2 years ago
- ☆246 · Updated 3 years ago
- [NeurIPS 2021] [T-PAMI] Global Filter Networks for Image Classification ☆477 · Updated last year
- ☆335 · Updated 2 years ago
- Official Code for "Non-deep Networks" ☆585 · Updated 2 years ago
- ☆647 · Updated 2 years ago