google-research / nested-transformer
Nested Hierarchical Transformer: https://arxiv.org/pdf/2105.12723.pdf
★200, updated last year
Alternatives and similar repositories for nested-transformer
Users interested in nested-transformer are comparing it to the repositories listed below.
- Implementation of the Halo Attention layer from the paper "Scaling Local Self-Attention for Parameter Efficient Visual Backbones" (★200, updated 4 years ago)
- Implementation of ResMLP, an all-MLP solution to image classification, in Pytorch (★201, updated 3 years ago)
- (★249, updated 3 years ago)
- Implementation of Transformer in Transformer, pixel-level attention paired with patch-level attention for image classification, in Pytorch (★310, updated 3 years ago)
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" (★291, updated 3 years ago)
- [Preprint] ConvMLP: Hierarchical Convolutional MLPs for Vision, 2021 (★167, updated 3 years ago)
- MLP-Like Vision Permutator for Visual Recognition (PyTorch) (★192, updated 3 years ago)
- (ICCV 2021 Oral) CoaT: Co-Scale Conv-Attentional Image Transformers (★234, updated 3 years ago)
- (★246, updated 4 years ago)
- [NeurIPS 2021] Official codes for "Efficient Training of Visual Transformers with Small Datasets" (★144, updated 11 months ago)
- Code for the Convolutional Vision Transformer (ConViT) (★470, updated 4 years ago)
- Implementation of Pixel-level Contrastive Learning, proposed in the paper "Propagate Yourself", in Pytorch (★264, updated 4 years ago)
- A better PyTorch implementation of image local attention which reduces the GPU memory by an order of magnitude (★142, updated 4 years ago)
- (★135, updated 2 years ago)
- VICRegL official code base (★231, updated 2 years ago)
- [ICLR 2023] "More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity"; [ICML 2023] "Are Large Kernels Better Teachers…" (★283, updated 2 years ago)
- EsViT: Efficient self-supervised Vision Transformers (★411, updated 2 years ago)
- (★204, updated last year)
- PyTorch reimplementation of the paper "Involution: Inverting the Inherence of Convolution for Visual Recognition" (2D and 3D Involution) … (★105, updated 3 years ago)
- PyTorch Implementation of CvT: Introducing Convolutions to Vision Transformers (★229, updated 4 years ago)
- Official PyTorch implementation of the paper "Solving ImageNet: a Unified Scheme for Training any Backbone to Top Results" (2022) (★193, updated 2 years ago)
- (★118, updated 3 years ago)
- Official PyTorch implementation of Fully Attentional Networks (★481, updated 2 years ago)
- Attention mechanism (★52, updated 4 years ago)
- Repository providing a wide range of self-supervised pretrained models for computer vision tasks (★61, updated 4 years ago)
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" (★558, updated 3 years ago)
- Official repository for "Revisiting Weakly Supervised Pre-Training of Visual Perception Models", https://arxiv.org/abs/2201.08371 (★182, updated 3 years ago)
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (★305, updated 4 years ago)
- [ICLR 2023] This repository includes the official implementation of our paper "Can CNNs Be More Robust Than Transformers?" (★143, updated 2 years ago)
- A Pytorch implementation of Global Self-Attention Network, a fully-attention backbone for vision tasks (★94, updated 5 years ago)