lukemelas / do-you-even-need-attention
Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723)
☆480 · Updated 3 years ago
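The paper's question is concrete: take a ViT-style block, replace the self-attention sublayer with a plain linear layer applied across the patch dimension, and the model still does surprisingly well on ImageNet. Below is a minimal PyTorch sketch of that substitution, not the repository's code; the module and parameter names are illustrative.

```python
import torch
import torch.nn as nn

class FeedForwardTokenMixer(nn.Module):
    """Stand-in for the attention sublayer: a single linear layer
    that mixes information across patches instead of computing
    attention weights."""
    def __init__(self, num_patches: int, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        # Acts on the patch axis, so the patch count must be fixed.
        self.token_linear = nn.Linear(num_patches, num_patches)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, dim)
        y = self.norm(x).transpose(1, 2)           # (batch, dim, num_patches)
        y = self.token_linear(y).transpose(1, 2)   # mix across patches
        return x + y                               # residual, as in a ViT block

x = torch.randn(2, 64, 192)                        # 64 patches of width 192
print(FeedForwardTokenMixer(64, 192)(x).shape)     # torch.Size([2, 64, 192])
```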
Related projects
Alternatives and complementary repositories for do-you-even-need-attention
- Pre-trained NFNets with 99% of the accuracy of the official paper "High-Performance Large-Scale Image Recognition Without Normalization". ☆159 · Updated 3 years ago
- NFNets and Adaptive Gradient Clipping for SGD implemented in PyTorch. Find an explanation at tourdeml.github.io/blog/ (a simplified AGC sketch appears after this list). ☆345 · Updated 10 months ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,062 · Updated 2 years ago
- Code to reproduce the results in the FAIR research papers "Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting V…" ☆487 · Updated last year
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆222 · Updated 2 years ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms (see the Fourier-mixing sketch after this list). ☆251 · Updated 3 years ago
- Seamless analysis of your PyTorch models (RAM usage, FLOPs, MACs, receptive field, etc.) ☆208 · Updated this week
- EsViT: Efficient self-supervised Vision Transformers ☆408 · Updated last year
- ☆241 · Updated 2 years ago
- Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in PyTorch ☆300 · Updated 2 years ago
- Useful PyTorch functions and modules that are not implemented in PyTorch by default ☆187 · Updated 6 months ago
- Collection of the latest, greatest deep learning optimizers for PyTorch, suitable for CNN and NLP work ☆211 · Updated 3 years ago
- ☆365 · Updated last year
- Estimate/count FLOPs for a given neural network using PyTorch ☆304 · Updated 2 years ago
- Learning Rate Warmup in PyTorch ☆392 · Updated this week
- Implementation of Linformer for PyTorch ☆257 · Updated 10 months ago
- Fully featured implementation of Routing Transformer ☆284 · Updated 3 years ago
- Code for the Convolutional Vision Transformer (ConViT) ☆462 · Updated 3 years ago
- A LARS implementation in PyTorch ☆335 · Updated 4 years ago
- Implementation of the 😇 Attention layer from the paper "Scaling Local Self-Attention For Parameter Efficient Visual Backbones" ☆199 · Updated 3 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆253 · Updated 3 years ago
- [NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers" ☆545 · Updated 2 years ago
- Implementation of popular SOTA self-supervised learning algorithms as Fastai Callbacks. ☆318 · Updated last year
- PyTorch dataset extended with map, cache, etc. (similar to tensorflow.data) ☆328 · Updated 2 years ago
- ☆566 · Updated 3 weeks ago
- Implementing Lambda Networks using PyTorch ☆138 · Updated 3 years ago
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆449 · Updated 2 years ago
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!) ☆500 · Updated 2 weeks ago
- Official PyTorch Implementation of "TResNet: High-Performance GPU-Dedicated Architecture" (WACV 2021) ☆471 · Updated this week
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ☆281 · Updated 2 years ago
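The NFNets/AGC entry above rescales a gradient whenever its norm grows too large relative to the norm of the parameter it updates, which is what lets NFNets train stably without batch normalization. Here is a deliberately simplified sketch using whole-tensor norms (the paper clips unit-wise, per row or filter); the function name and defaults are illustrative, not taken from the repository.

```python
import torch

def adaptive_grad_clip_(params, clip: float = 0.01, eps: float = 1e-3):
    """Clip each gradient in place so that ||grad|| <= clip * ||param||."""
    for p in params:
        if p.grad is None:
            continue
        w_norm = p.detach().norm().clamp_min(eps)  # eps guards near-zero params
        g_norm = p.grad.detach().norm()
        max_norm = clip * w_norm
        if g_norm > max_norm:                      # only rescale oversized grads
            p.grad.mul_(max_norm / g_norm)

# Typical use: call between loss.backward() and optimizer.step().
```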
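The FNet entry replaces self-attention with an unparameterized Fourier mixing step: an FFT over the hidden dimension, then an FFT over the sequence dimension, keeping only the real part. A minimal sketch of that mixing step (the function name is illustrative):

```python
import torch

def fourier_token_mixing(x: torch.Tensor) -> torch.Tensor:
    # x: (batch, seq_len, hidden) -> same shape, real-valued.
    # No learned weights: token mixing comes entirely from the FFTs.
    return torch.fft.fft(torch.fft.fft(x, dim=-1), dim=-2).real

x = torch.randn(2, 16, 32)
print(fourier_token_mixing(x).shape)  # torch.Size([2, 16, 32])
```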