SHI-Labs / Neighborhood-Attention-Transformer
Neighborhood Attention Transformer, arXiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arXiv 2022
⭐1,135 · Updated last year
Alternatives and similar repositories for Neighborhood-Attention-Transformer
Users interested in Neighborhood-Attention-Transformer are comparing it to the libraries listed below.
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) ⭐1,347 · Updated last year
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ⭐1,076 · Updated 2 years ago
- [NeurIPS 2022] Official code for "Focal Modulation Networks" ⭐740 · Updated last year
- [ECCV 2022] Official repository for "MaxViT: Multi-Axis Vision Transformer". SOTA foundation models for classification, detection, segmen… ⭐479 · Updated 2 years ago
- This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling". ⭐988 · Updated 2 years ago
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" ⭐819 · Updated 3 years ago
- Repository of Vision Transformer with Deformable Attention (CVPR 2022) and DAT++: Spatially Dynamic Vision Transformer with Deformable Atte… ⭐887 · Updated last year
- Code release for ConvNeXt V2 model ⭐1,801 · Updated 11 months ago
- MetaFormer Baselines for Vision (TPAMI 2024) ⭐477 · Updated last year
- EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022] ⭐1,061 · Updated last year
- An All-MLP solution for Vision, from Google AI ⭐1,034 · Updated 3 weeks ago
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions ⭐1,388 · Updated last month
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" ⭐556 · Updated 3 years ago
- [ICLR 2023 Oral] Image as Set of Points ⭐570 · Updated last year
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ⭐797 · Updated last month
- This is an official implementation of CvT: Introducing Convolutions to Vision Transformers. ⭐581 · Updated 2 years ago
- CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows, CVPR 2022 ⭐577 · Updated last year
- This is a collection of our NAS and Vision Transformer work. ⭐1,785 · Updated last year
- Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs (CVPR 2022) ⭐923 · Updated last year
- Per-Pixel Classification is Not All You Need for Semantic Segmentation (NeurIPS 2021, spotlight) ⭐1,418 · Updated 3 years ago
- [ICLR 2022] Official implementation of UniFormer ⭐876 · Updated last year
- MultiMAE: Multi-modal Multi-task Masked Autoencoders, ECCV 2022 ⭐587 · Updated 2 years ago
- [ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention ⭐867 · Updated last week
- [ECCV 2022] Code for paper "DaViT: Dual Attention Vision Transformer" ⭐361 · Updated last year
- Implementation of Deformable Attention in Pytorch from the paper "Vision Transformer with Deformable Attention" ⭐346 · Updated 5 months ago
- A simple way to keep track of an Exponential Moving Average (EMA) version of your Pytorch model ⭐600 · Updated 7 months ago
- ConvMAE: Masked Convolution Meets Masked Autoencoders ⭐506 · Updated 2 years ago
- Label-Efficient Semantic Segmentation with Diffusion Models (ICLR'2022) ⭐705 · Updated 2 years ago
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) ⭐920 · Updated last year
- Two simple and effective vision transformer designs that are on par with the Swin Transformer ⭐604 · Updated 2 years ago