shantanuj / TDAM_Top_down_attention_module
Official PyTorch implementation of "TDAM: Top-down attention module for CNNs"
☆11, updated 2 years ago
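For orientation, the title suggests the module's general pattern: higher-level ("top-down") information is fed back to re-weight lower-level feature maps. The snippet below is only a minimal conceptual sketch of that pattern in PyTorch; it is not the repository's actual module or API, and the names (`TopDownAttentionSketch`, `n_iters`, `reduction`) are invented for illustration.

```python
# Minimal conceptual sketch of iterative top-down channel attention.
# NOT the TDAM repository's implementation; names are hypothetical.
import torch
import torch.nn as nn

class TopDownAttentionSketch(nn.Module):
    """Re-weights a feature map with a global (top-down) descriptor,
    iterating the feedback a small number of times."""
    def __init__(self, channels: int, reduction: int = 16, n_iters: int = 2):
        super().__init__()
        self.n_iters = n_iters
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        for _ in range(self.n_iters):
            g = x.mean(dim=(2, 3))                       # global descriptor, (B, C)
            w = self.fc(g).unsqueeze(-1).unsqueeze(-1)   # channel weights, (B, C, 1, 1)
            x = x * w                                    # modulate lower-level features
        return x

if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    out = TopDownAttentionSketch(64)(feat)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```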
Alternatives and similar repositories for TDAM_Top_down_attention_module
Users interested in TDAM_Top_down_attention_module are comparing it to the repositories listed below.
- Multi-head Recurrent Layer Attention for Vision Network (☆19, updated 2 years ago)
- [BMVC 2021] The official PyTorch implementation of Feature Fusion Vision Transformer for Fine-Grained Visual Categorization (☆49, updated 2 years ago)
- ☆26, updated 2 years ago
- Deep Networks with Recurrent Layer Aggregation (☆28, updated 3 years ago)
- ☆142, updated last year
- Source code of the paper "Fine-Grained Visual Classification via Internal Ensemble Learning Transformer" (☆48, updated last year)
- Official code and pretrained models for Dynamic MLP (CVPR 2022), https://arxiv.org/abs/2203.03253 (☆87, updated 3 years ago)
- [CVPR 2023] Code release for "An Erudite Fine-Grained Visual Classification Model" (☆16, updated 2 years ago)
- [T-IP 2023] Code for exponential adaptive pooling for PyTorch (☆82, updated 2 years ago)
- ☆25, updated 11 months ago
- RAMS-Trans: Recurrent Attention Multi-scale Transformer for Fine-grained Image Recognition (☆12, updated 3 years ago)
- [TIP 2021] Code for Part-Guided Relational Transformers for Fine-Grained Visual Recognition (☆22, updated last year)
- ☆33, updated 3 years ago
- Unofficial implementation of "BOAT: Bilateral Local Attention Vision Transformer" (☆55, updated 3 years ago)
- [AAAI 2022] The official PyTorch implementation of "Less is More: Pay Less Attention in Vision Transformers" (☆97, updated 3 years ago)
- Official implementation of "Making Vision Transformers Efficient from A Token Sparsification View" (☆34, updated 4 months ago)
- TCM: Temporal Correlation Module (☆18, updated 4 years ago)
- The official implementation of ALOFT (CVPR 2023) (☆54, updated last year)
- Official implementation of "CAT: Cross Attention in Vision Transformer" (☆160, updated 3 years ago)
- Vision Transformers with Hierarchical Attention (☆102, updated 9 months ago)
- [CVPR 2023 Highlight] Masked Image Modeling with Local Multi-Scale Reconstruction (☆49, updated last year)
- The official repository of the paper "Learning Correlation Structures for Vision Transformers", accepted to CVPR 2024 (☆48, updated last year)
- [ICASSP 2024] Parallel Augmentation and Dual Enhancement for Occluded Person Re-identification (☆18, updated last year)
- ☆44, updated 2 years ago
- The official implementation of ELSA: Enhanced Local Self-Attention for Vision Transformer (☆116, updated last year)
- GroupMixAttention and GroupMixFormer (☆118, updated last year)
- ☆36, updated 3 years ago
- Feature-map visualization, implemented in PyTorch (☆40, updated 3 years ago)
- AFNet (NeurIPS 2022) (☆19, updated 2 years ago)
- ☆28, updated last year