lucidrains / reformer-pytorch
Reformer, the efficient Transformer, in PyTorch
☆2,192 · Updated 2 years ago
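A minimal usage sketch, following the ReformerLM example in this repository's README; the constructor arguments shown are taken from that README and may differ across versions:

```python
# Sketch based on the reformer-pytorch README; the arguments used here
# (num_tokens, dim, depth, max_seq_len, heads, n_hashes, causal) follow
# the README's ReformerLM example and may differ across versions.
import torch
from reformer_pytorch import ReformerLM

model = ReformerLM(
    num_tokens = 20000,   # vocabulary size
    dim = 512,            # model dimension
    depth = 6,            # number of (reversible) layers
    max_seq_len = 8192,   # LSH attention keeps contexts this long tractable
    heads = 8,
    n_hashes = 4,         # rounds of LSH hashing per attention layer
    causal = True         # autoregressive language modeling
)

x = torch.randint(0, 20000, (1, 8192))
logits = model(x)         # (1, 8192, 20000)
```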
Alternatives and similar repositories for reformer-pytorch
Users interested in reformer-pytorch are comparing it to the libraries listed below.
- PyTorch library for fast transformer implementations ☆1,760 · Updated 2 years ago
- An implementation of Performer, a linear-attention-based transformer, in PyTorch ☆1,172 · Updated 3 years ago
- Longformer: The Long-Document Transformer ☆2,183 · Updated 2 years ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length (the underlying trick is sketched in the first code block after this list) ☆825 · Updated last year
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" ☆1,608 · Updated 5 years ago
- My take on a practical implementation of Linformer for PyTorch ☆422 · Updated 3 years ago
- Long Range Arena for Benchmarking Efficient Transformers ☆776 · Updated 2 years ago
- ☆3,680 · Updated 3 years ago
- Implementation of Perceiver, General Perception with Iterative Attention, in PyTorch ☆1,193 · Updated 2 years ago
- Transformers for Longer Sequences ☆626 · Updated 3 years ago
- An All-MLP solution for Vision, from Google AI ☆1,056 · Updated 6 months ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆611 · Updated last year
- DeLighT: Very Deep and Light-Weight Transformers ☆468 · Updated 5 years ago
- Repository for NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients" ☆1,067 · Updated last year
- A PyTorch implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation" ☆575 · Updated 5 years ago
- PyTorch implementation of assorted attention mechanisms for deep learning researchers ☆547 · Updated 3 years ago
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) ☆1,224 · Updated last year
- torch-optimizer -- a collection of optimizers for PyTorch ☆3,162 · Updated last year
- A list of efficient attention modules ☆1,022 · Updated 4 years ago
- Source code for "On the Relationship between Self-Attention and Convolutional Layers" ☆1,116 · Updated 3 years ago
- Hopfield Networks is All You Need ☆1,896 · Updated 2 years ago
- Ranger - a synergistic optimizer using RAdam (Rectified Adam), Gradient Centralization and LookAhead in one codebase ☆1,208 · Updated 2 years ago
- Unsupervised Data Augmentation (UDA) ☆2,205 · Updated 4 years ago
- Single Headed Attention RNN - "Stop thinking with your head" ☆1,180 · Updated 4 years ago
- FastFormers - highly efficient transformer models for NLU ☆709 · Updated 10 months ago
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models (the top-k gating idea is sketched in the second code block after this list) ☆846 · Updated 2 years ago
- Implementation of LambdaNetworks, a new approach to image recognition that reaches SOTA with less compute ☆1,532 · Updated 5 years ago
- Fully featured implementation of Routing Transformer ☆300 · Updated 4 years ago
- Implementation of gMLP, an all-MLP replacement for Transformers, in PyTorch ☆430 · Updated 4 years ago
- Transformer training code for sequential tasks ☆610 · Updated 4 years ago
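Several entries above (Performer, the linear-complexity transformer, the fast-transformers library) build on the same kernel-feature trick: replacing softmax(QKᵀ)V, which costs O(n²) in sequence length n, with φ(Q)(φ(K)ᵀV), which costs O(n). The non-causal sketch below is my own illustration rather than code from any of these repositories; the feature map φ(x) = elu(x) + 1 follows "Transformers are RNNs" (Katharopoulos et al., 2020):

```python
# Minimal non-causal linear attention; my illustration, not library code.
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k: (batch, heads, n, d); v: (batch, heads, n, e)
    q, k = F.elu(q) + 1, F.elu(k) + 1            # positive feature map phi
    kv = torch.einsum('bhnd,bhne->bhde', k, v)   # O(n) key/value summary
    z = 1.0 / (torch.einsum('bhnd,bhd->bhn', q, k.sum(dim=2)) + eps)  # normalizer
    return torch.einsum('bhnd,bhde,bhn->bhne', q, kv, z)

q = k = v = torch.randn(1, 8, 4096, 64)
out = linear_attention(q, k, v)  # (1, 8, 4096, 64), no n-by-n matrix ever built
```

Because the n-by-n attention matrix is never materialized, memory and time grow linearly with sequence length, which is what makes these libraries attractive for long inputs.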
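Two entries above re-implement the sparsely-gated mixture-of-experts layer (Shazeer et al., 2017), whose core is top-k gating: each token is routed to its k highest-scoring expert feed-forward networks, so parameter count grows with the number of experts while per-token compute stays roughly constant. The sketch below is my own minimal illustration (the class name TopKMoE and all hyperparameters are invented for the example), omitting the paper's noise and load-balancing terms:

```python
# Minimal top-k MoE gating; my illustration, not either repository's code.
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    def __init__(self, dim, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, num_experts)   # learned router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                          # x: (tokens, dim)
        weights, idx = self.gate(x).topk(self.k, dim=-1)
        weights = weights.softmax(dim=-1)          # renormalize over the chosen k
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            tok, slot = (idx == i).nonzero(as_tuple=True)  # tokens routed to expert i
            if tok.numel():
                out[tok] += weights[tok, slot, None] * expert(x[tok])
        return out

moe = TopKMoE(dim=64)
y = moe(torch.randn(16, 64))                       # (16, 64)
```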