lukemelas / do-you-even-need-attention
Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723)
☆483 · Updated 3 years ago
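For orientation, the repo asks whether the self-attention layer in a Vision Transformer can be replaced by a simple feed-forward layer applied across the patch dimension. Below is a minimal PyTorch sketch of that idea; the module names (`FeedForward`, `AttentionFreeBlock`) and dimensions are illustrative assumptions, not the repo's actual API.

```python
import torch
import torch.nn as nn

class FeedForward(nn.Module):
    # standard transformer MLP: linear -> GELU -> linear
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, dim),
        )

    def forward(self, x):
        return self.net(x)

class AttentionFreeBlock(nn.Module):
    # ViT-style block where the attention sublayer is swapped for a
    # feed-forward layer applied over the patch (token) axis
    def __init__(self, dim, num_patches, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.token_ff = FeedForward(num_patches, num_patches * mlp_ratio)
        self.channel_ff = FeedForward(dim, dim * mlp_ratio)

    def forward(self, x):  # x: (batch, num_patches, dim)
        # mix information across patches: transpose so the Linear
        # layers act on the patch axis instead of the channel axis
        y = self.norm1(x).transpose(1, 2)         # (batch, dim, num_patches)
        x = x + self.token_ff(y).transpose(1, 2)  # back to (batch, num_patches, dim)
        # standard channel MLP, as in a regular transformer block
        x = x + self.channel_ff(self.norm2(x))
        return x

# usage: shapes are preserved, so blocks stack like ViT blocks
block = AttentionFreeBlock(dim=384, num_patches=196)
out = block(torch.randn(2, 196, 384))  # -> (2, 196, 384)
```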
Alternatives and similar repositories for do-you-even-need-attention:
Users interested in do-you-even-need-attention are comparing it to the libraries listed below.
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆225 · Updated 2 years ago
- NFNets and Adaptive Gradient Clipping for SGD implemented in PyTorch. Explanation at tourdeml.github.io/blog/ ☆344 · Updated last year
- Seamless analysis of your PyTorch models (RAM usage, FLOPs, MACs, receptive field, etc.) ☆217 · Updated this week
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (train your Vision Transformers in 30 minutes on CIFAR-10 with a single GPU!) ☆515 · Updated 3 months ago
- Collection of the latest, greatest deep learning optimizers for PyTorch, suitable for CNN and NLP models ☆212 · Updated 3 years ago
- Code for the Convolutional Vision Transformer (ConViT) ☆467 · Updated 3 years ago
- Implementation of Transformer in Transformer, pixel-level attention paired with patch-level attention for image classification, in Pytorc… ☆305 · Updated 3 years ago
- A LARS implementation in PyTorch ☆342 · Updated 5 years ago
- ☆374 · Updated last year
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆258 · Updated 3 years ago
- Pre-trained NFNets with 99% of the accuracy of the official paper "High-Performance Large-Scale Image Recognition Without Normalization". ☆159 · Updated 3 years ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,065 · Updated 2 years ago
- Code to reproduce the results in the FAIR research papers "Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting V… ☆488 · Updated last year
- Fully featured implementation of Routing Transformer ☆289 · Updated 3 years ago
- EsViT: Efficient self-supervised Vision Transformers ☆411 · Updated last year
- ☆245 · Updated 2 years ago
- Estimate/count FLOPs for a given neural network using PyTorch ☆304 · Updated 2 years ago
- Official code for the Cross-Covariance Image Transformer (XCiT) ☆663 · Updated 3 years ago
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision ☆217 · Updated 3 years ago
- Ranger deep learning optimizer, rewritten to use the newest components ☆328 · Updated last year
- Implementation of Linformer for PyTorch ☆270 · Updated last year
- Compute CNN receptive field size in PyTorch in one line ☆357 · Updated 9 months ago
- Implementation of the 😇 Attention layer from the paper "Scaling Local Self-Attention for Parameter Efficient Visual Backbones" ☆198 · Updated 3 years ago
- Implementation of gMLP, an all-MLP replacement for Transformers, in PyTorch ☆425 · Updated 3 years ago
- A PyTorch Lightning implementation of self-supervised algorithms ☆537 · Updated 2 years ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" ☆549 · Updated 2 years ago
- Implementation of Pixel-level Contrastive Learning, proposed in the paper "Propagate Yourself", in PyTorch ☆257 · Updated 4 years ago
- Tiny PyTorch library for maintaining a moving average of a collection of parameters. ☆421 · Updated 5 months ago
- Code for Noisy Student Training: https://arxiv.org/abs/1911.04252 ☆759 · Updated 3 years ago
- Useful PyTorch functions and modules that are not implemented in PyTorch by default ☆187 · Updated 10 months ago