SHI-Labs / Neighborhood-Attention-Transformer
Neighborhood Attention Transformer, arXiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arXiv 2022.
☆1,146 · Updated last year
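For context on what this repository implements: neighborhood attention restricts each query pixel to a fixed-size window of keys and values centered on it, giving self-attention the sliding-window locality of a convolution. The sketch below is a minimal, single-head PyTorch illustration of that idea, not the repo's NATTEN API; it zero-pads at image borders for simplicity, whereas the papers reposition the window at edges so every query attends to exactly kernel_size² keys (and the dilated variant spaces the window out). All names and tensor layouts here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def neighborhood_attention(q, k, v, kernel_size=7):
    """Single-head neighborhood attention over (B, C, H, W) feature maps.

    Illustrative only: zero-padded borders, no heads, no relative
    positional bias; the official NATTEN kernels handle all of these.
    """
    B, C, H, W = q.shape
    pad = kernel_size // 2
    # Gather the k*k key/value window around every pixel: (B, C, k*k, H*W)
    k_win = F.unfold(k, kernel_size, padding=pad).view(B, C, kernel_size ** 2, H * W)
    v_win = F.unfold(v, kernel_size, padding=pad).view(B, C, kernel_size ** 2, H * W)
    # Scaled dot product of each query against only its own window: (B, k*k, H*W)
    attn = (q.view(B, C, 1, H * W) * k_win).sum(dim=1) / C ** 0.5
    attn = attn.softmax(dim=1)
    # Attention-weighted sum of the value windows, back to a feature map
    return (attn.unsqueeze(1) * v_win).sum(dim=2).view(B, C, H, W)

# Each output pixel is a weighted mix of its 7x7 neighborhood only,
# so cost scales with H*W*k*k rather than (H*W)**2 for full attention.
x = torch.randn(2, 64, 14, 14)
print(neighborhood_attention(x, x, x).shape)  # torch.Size([2, 64, 14, 14])
```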
Alternatives and similar repositories for Neighborhood-Attention-Transformer
Users interested in Neighborhood-Attention-Transformer are comparing it to the libraries listed below.
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) ☆1,355 · Updated last year
- Repository of Vision Transformer with Deformable Attention (CVPR 2022) and DAT++: Spatially Dynamic Vision Transformer with Deformable Attention ☆908 · Updated last year
- [NeurIPS 2022] Official code for "Focal Modulation Networks" ☆741 · Updated last year
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,077 · Updated 2 years ago
- EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022] ☆1,081 · Updated 2 years ago
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" ☆819 · Updated 3 years ago
- This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling". ☆998 · Updated 3 years ago
- Code release for ConvNeXt V2 model ☆1,853 · Updated last year
- [ECCV 2022] Official repository for "MaxViT: Multi-Axis Vision Transformer". SOTA foundation models for classification, detection, segmentation… ☆487 · Updated 2 years ago
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions ☆1,422 · Updated 4 months ago
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆799 · Updated 4 months ago
- MetaFormer Baselines for Vision (TPAMI 2024) ☆492 · Updated last year
- CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows, CVPR 2022 ☆584 · Updated last year
- An All-MLP solution for Vision, from Google AI ☆1,048 · Updated 3 months ago
- Per-Pixel Classification is Not All You Need for Semantic Segmentation (NeurIPS 2021, spotlight) ☆1,433 · Updated 3 years ago
- [ICLR 2023 Oral] Image as Set of Points ☆571 · Updated last year
- Implementation of Deformable Attention in PyTorch from the paper "Vision Transformer with Deformable Attention" ☆362 · Updated 8 months ago
- Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs (CVPR 2022) ☆930 · Updated last year
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" ☆559 · Updated 3 years ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,110 · Updated last year
- A simple way to keep track of an Exponential Moving Average (EMA) version of your PyTorch model ☆617 · Updated 10 months ago
- ConvMAE: Masked Convolution Meets Masked Autoencoders ☆516 · Updated 2 years ago
- [ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention ☆880 · Updated 3 months ago
- [ICML 2023] Official PyTorch implementation of Global Context Vision Transformers ☆440 · Updated last year
- A PyTorch implementation of "CoAtNet: Marrying Convolution and Attention for All Data Sizes" ☆391 · Updated 4 years ago
- Label-Efficient Semantic Segmentation with Diffusion Models (ICLR 2022) ☆709 · Updated 2 years ago
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) ☆930 · Updated last year
- [ECCV 2022] Code for paper "DaViT: Dual Attention Vision Transformer" ☆368 · Updated last year
- [ICLR 2022] Official implementation of UniFormer ☆882 · Updated last year
- This is an official implementation of CvT: Introducing Convolutions to Vision Transformers. ☆583 · Updated 2 years ago