google-research / nested-transformer
Nested Hierarchical Transformer https://arxiv.org/pdf/2105.12723.pdf
⭐197 · Updated 11 months ago
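For context, the core idea of the Nested Hierarchical Transformer (NesT) is to run standard transformer layers independently inside non-overlapping blocks of image patches, then merge neighbouring blocks between hierarchy levels with a small convolutional "block aggregation" step. The sketch below is a hypothetical, simplified PyTorch illustration of that blockify → local attention → aggregate loop; it is not the google-research/nested-transformer code (which is written in JAX/Flax), and the shapes, helper names, and the use of `nn.TransformerEncoder` are assumptions made for brevity.

```python
# Minimal sketch of the NesT block-aggregation idea (https://arxiv.org/abs/2105.12723).
# Hypothetical, simplified code -- not the official JAX/Flax implementation.
import torch
import torch.nn as nn


def blockify(x, block_size):
    """Split a (B, H, W, C) feature map into non-overlapping blocks:
    returns (B, num_blocks, block_size*block_size, C)."""
    B, H, W, C = x.shape
    x = x.view(B, H // block_size, block_size, W // block_size, block_size, C)
    x = x.permute(0, 1, 3, 2, 4, 5).reshape(B, -1, block_size * block_size, C)
    return x


def unblockify(x, height, width, block_size):
    """Inverse of blockify: (B, num_blocks, block_size^2, C) -> (B, H, W, C)."""
    B, _, _, C = x.shape
    x = x.view(B, height // block_size, width // block_size, block_size, block_size, C)
    x = x.permute(0, 1, 3, 2, 4, 5).reshape(B, height, width, C)
    return x


class BlockAggregation(nn.Module):
    """Conv + norm + max-pool between hierarchy levels, merging groups of
    neighbouring blocks (a sketch of the paper's aggregation step)."""

    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.conv = nn.Conv2d(dim_in, dim_out, kernel_size=3, padding=1)
        self.norm = nn.LayerNorm(dim_out)
        self.pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

    def forward(self, x):  # x: (B, H, W, C)
        x = self.conv(x.permute(0, 3, 1, 2))           # (B, C_out, H, W)
        x = self.norm(x.permute(0, 2, 3, 1))           # back to (B, H, W, C_out)
        x = self.pool(x.permute(0, 3, 1, 2)).permute(0, 2, 3, 1)
        return x                                       # (B, H/2, W/2, C_out)


# Usage: run a transformer encoder independently inside each block,
# then aggregate and re-block at the next hierarchy level.
B, H, W, C, block = 2, 16, 16, 64, 4
feats = torch.randn(B, H, W, C)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=C, nhead=4, batch_first=True), num_layers=2)

blocks = blockify(feats, block)                         # (2, 16, 16, 64): 16 blocks of 4x4 tokens
blocks = encoder(blocks.flatten(0, 1)).view_as(blocks)  # local attention per block
feats = unblockify(blocks, H, W, block)                 # (2, 16, 16, 64)
feats = BlockAggregation(C, 128)(feats)                 # (2, 8, 8, 128)
```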
Alternatives and similar repositories for nested-transformer
Users interested in nested-transformer are comparing it to the libraries listed below:
- ⭐247 · Updated 3 years ago
- Implementation of the Halo Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones ⭐199 · Updated 4 years ago
- [Preprint] ConvMLP: Hierarchical Convolutional MLPs for Vision, 2021 ⭐167 · Updated 2 years ago
- [NeurIPS 2021] Official codes for "Efficient Training of Visual Transformers with Small Datasets". ⭐144 · Updated 6 months ago
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ⭐289 · Updated 3 years ago
- MLP-Like Vision Permutator for Visual Recognition (PyTorch) ⭐191 · Updated 3 years ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorch ⭐305 · Updated 3 years ago
- Implementation of ResMLP, an all MLP solution to image classification, in Pytorch ⭐198 · Updated 2 years ago
- (ICCV 2021 Oral) CoaT: Co-Scale Conv-Attentional Image Transformers ⭐232 · Updated 3 years ago
- ⭐134 · Updated 2 years ago
- ⭐200 · Updated 11 months ago
- Official PyTorch implementation of the paper: "Solving ImageNet: a Unified Scheme for Training any Backbone to Top Results" (2022) ⭐193 · Updated 2 years ago
- A better PyTorch implementation of image local attention which reduces the GPU memory by an order of magnitude. ⭐141 · Updated 3 years ago
- Code for the Convolutional Vision Transformer (ConViT) ⭐466 · Updated 3 years ago
- ⭐119 · Updated 3 years ago
- Implementation of Pixel-level Contrastive Learning, proposed in the paper "Propagate Yourself", in Pytorch ⭐259 · Updated 4 years ago
- [ICLR 2023] "More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity"; [ICML 2023] "Are Large Kernels Better Teachers…" ⭐276 · Updated 2 years ago
- Repository providing a wide range of self-supervised pretrained models for computer vision tasks. ⭐61 · Updated 4 years ago
- PyTorch Implementation of CvT: Introducing Convolutions to Vision Transformers ⭐226 · Updated 4 years ago
- Official repository for "Revisiting Weakly Supervised Pre-Training of Visual Perception Models". https://arxiv.org/abs/2201.08371 ⭐179 · Updated 3 years ago
- Implementation of Convolutional enhanced image Transformer ⭐105 · Updated 4 years ago
- VICRegL official code base ⭐228 · Updated 2 years ago
- ⭐138 · Updated 3 years ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" ⭐556 · Updated 3 years ago
- Self-supervised vIsion Transformer (SiT) ⭐336 · Updated 2 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ⭐486 · Updated 4 years ago
- Implementation of Uniformer, a simple attention and 3d convolutional net that achieved SOTA in a number of video classification tasks, de… ⭐101 · Updated 3 years ago
- (Unofficial) PyTorch implementation of the paper Early Convolutions Help Transformers See Better ⭐43 · Updated 3 years ago
- PyTorch reimplementation of the paper "Involution: Inverting the Inherence of Convolution for Visual Recognition" (2D and 3D Involution) … ⭐105 · Updated 3 years ago