facebookresearch/LeViT
LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference (☆606, updated 2 years ago)
Alternatives and similar repositories for LeViT:
Users interested in LeViT are comparing it to the repositories listed below.
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" (☆549, updated 2 years ago)
- This is an official implementation of CvT: Introducing Convolutions to Vision Transformers. (☆566, updated last year)
- Two simple and effective designs of vision transformer, which is on par with the Swin transformer (☆597, updated 2 years ago)
- PyTorch implementation of "All Tokens Matter: Token Labeling for Training Better Vision Transformers" (☆426, updated last year)
- Code for the Convolutional Vision Transformer (ConViT) (☆466, updated 3 years ago)
- This is an official implementation of CvT: Introducing Convolutions to Vision Transformers. (☆227, updated 2 years ago)
- [ICLR 2023] "More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity"; [ICML 2023] "Are Large Kernels Better Teachers… (☆264, updated last year)
- [NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification (☆591, updated last year)
- A PyTorch implementation of "MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer" (☆515, updated 3 years ago)
- EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022] (☆1,013, updated last year)
- Official PyTorch implementation of Fully Attentional Networks (☆476, updated last year)
- Official MegEngine implementation of RepLKNet (☆273, updated 2 years ago)
- This is an official implementation for "ResT: An Efficient Transformer for Visual Recognition". (☆282, updated 2 years ago)
- RepMLPNet: Hierarchical Vision MLP with Re-parameterized Locality (CVPR 2022) (☆306, updated 2 years ago)
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) (☆1,312, updated 8 months ago)
- MetaFormer Baselines for Vision (TPAMI 2024) (☆440, updated 8 months ago)
- Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs (CVPR 2022) (☆891, updated 9 months ago)
- [ECCV 2022] Code for the paper "DaViT: Dual Attention Vision Transformer" (☆345, updated last year)
- [NeurIPS 2022] HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions (☆327, updated last year)
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" (☆282, updated 2 years ago)
- Implementation of the 😇 Attention layer from the paper "Scaling Local Self-Attention For Parameter Efficient Visual Backbones" (☆198, updated 3 years ago)
- (ICCV 2021 Oral) CoaT: Co-Scale Conv-Attentional Image Transformers (☆231, updated 3 years ago)
- CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows (CVPR 2022) (☆557, updated last year)
- Diverse Branch Block: Building a Convolution as an Inception-like Unit (☆331, updated 2 years ago)
- [ECCV 2022] Source code of "EdgeFormer: Improving Light-weight ConvNets by Learning from Vision Transformers" (☆349, updated 2 years ago)
- Official Code for "Non-deep Networks" (☆585, updated 2 years ago)