JDAI-CV / CoTNet
This is an official implementation for "Contextual Transformer Networks for Visual Recognition".
Related projects
Alternatives and complementary repositories for CoTNet
- Official repository of ACmix (CVPR 2022)
- Bottleneck Transformers for Visual Recognition
- Official implementation of "ResT: An Efficient Transformer for Visual Recognition"
- CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows (CVPR 2022)
- [ECCV 2022] Source code for "EdgeFormer: Improving Light-weight ConvNets by Learning from Vision Transformers"
- Two simple and effective designs of vision transformers that are on par with the Swin Transformer
- [ICCV 2021] Code for approximated exponential maximum pooling
- [NeurIPS 2022] HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions
- Official PyTorch implementation of the ICML 2021 paper "SimAM: A Simple, Parameter-Free Attention Module for Convolutional Neural Networks"
- Official code for "Conformer: Local Features Coupling Global Representations for Visual Recognition"
- Official MegEngine implementation of RepLKNet
- Code for the CVPR 2021 paper on coordinate attention
- Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs (CVPR 2022)
- Code for the ICASSP 2021 paper "SA-Net: Shuffle Attention for Deep Convolutional Neural Networks"
- FcaNet: Frequency Channel Attention Networks
- RepMLPNet: Hierarchical Vision MLP with Re-parameterized Locality (CVPR 2022)
- Official code for the paper: https://openreview.net/forum?id=_PHymLIxuI
- Official PyTorch implementation of "Rotate to Attend: Convolutional Triplet Attention Module" (WACV 2021)
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers"
- PyTorch implementation of "All Tokens Matter: Token Labeling for Training Better Vision Transformers"
- Official implementation of the PVT series
- Official implementation of "Polarized Self-Attention: Towards High-quality Pixel-wise Regression"
- Diverse Branch Block: Building a Convolution as an Inception-like Unit
- [ECCV 2022] Code for the paper "DaViT: Dual Attention Vision Transformer"
- Official implementation of "Self-Supervised Learning with Swin Transformers"
- TopFormer: Token Pyramid Transformer for Mobile Semantic Segmentation (CVPR 2022)
- PyTorch implementation of "MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer"
- Simple PyTorch implementation of Mobile-Former