google-research / nested-transformer
Nested Hierarchical Transformer https://arxiv.org/pdf/2105.12723.pdf
★198 · Updated last year
Alternatives and similar repositories for nested-transformer
Users interested in nested-transformer are comparing it to the libraries listed below
- Implementation of the attention layer from the paper "Scaling Local Self-Attention For Parameter Efficient Visual Backbones" · ★200 · Updated 4 years ago
- ★249 · Updated 3 years ago
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" · ★291 · Updated 3 years ago
- Implementation of ResMLP, an all-MLP solution to image classification, in PyTorch · ★200 · Updated 2 years ago
- Implementation of Transformer in Transformer, pixel-level attention paired with patch-level attention for image classification, in PyTorch · ★309 · Updated 3 years ago
- ★246 · Updated 4 years ago
- MLP-Like Vision Permutator for Visual Recognition (PyTorch) · ★192 · Updated 3 years ago
- (ICCV 2021 Oral) CoaT: Co-Scale Conv-Attentional Image Transformers · ★234 · Updated 3 years ago
- ★135 · Updated 2 years ago
- [Preprint] ConvMLP: Hierarchical Convolutional MLPs for Vision, 2021 · ★167 · Updated 3 years ago
- [NeurIPS 2021] Official code for "Efficient Training of Visual Transformers with Small Datasets" · ★144 · Updated 9 months ago
- A better PyTorch implementation of image local attention that reduces GPU memory usage by an order of magnitude · ★140 · Updated 3 years ago
- Implementation of Uniformer, a simple attention and 3D convolutional net that achieved SOTA in a number of video classification tasks, de… · ★102 · Updated 3 years ago
- ★202 · Updated last year
- Official PyTorch implementation of the paper "Solving ImageNet: a Unified Scheme for Training any Backbone to Top Results" (2022) · ★193 · Updated 2 years ago
- PyTorch reimplementation of the paper "Involution: Inverting the Inherence of Convolution for Visual Recognition" (2D and 3D Involution) … · ★105 · Updated 3 years ago
- ★118 · Updated 3 years ago
- EsViT: Efficient self-supervised Vision Transformers · ★412 · Updated 2 years ago
- ★140 · Updated 3 years ago
- An implementation of the efficient attention module · ★321 · Updated 4 years ago
- Official repository for "Revisiting Weakly Supervised Pre-Training of Visual Perception Models" (https://arxiv.org/abs/2201.08371) · ★180 · Updated 3 years ago
- Code for the Convolutional Vision Transformer (ConViT) · ★469 · Updated 4 years ago
- [ICLR 2023] "More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity"; [ICML 2023] "Are Large Kernels Better Teachers…" · ★279 · Updated 2 years ago
- ★193 · Updated 2 years ago
- Official PyTorch implementation of Fully Attentional Networks · ★480 · Updated 2 years ago
- PyTorch Implementation of CvT: Introducing Convolutions to Vision Transformers · ★228 · Updated 4 years ago
- Implementation of Convolution-enhanced image Transformer (CeiT) · ★105 · Updated 4 years ago
- LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference · ★618 · Updated 3 years ago
- VICRegL official code base · ★231 · Updated 2 years ago
- Pre-trained NFNets with 99% of the accuracy of the official paper "High-Performance Large-Scale Image Recognition Without Normalization" · ★160 · Updated 4 years ago