lucidrains / reformer-pytorch
Reformer, the efficient Transformer, in Pytorch
☆2,181 · Updated 2 years ago
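For context, the repository exposes Reformer modules as ordinary PyTorch layers. A minimal usage sketch follows; the hyperparameter values are illustrative only, and the full constructor argument list should be checked against the repository's README:

```python
import torch
from reformer_pytorch import ReformerLM  # LSH-attention language-model wrapper

# Illustrative hyperparameters; not a recommended configuration.
model = ReformerLM(
    num_tokens = 20000,   # vocabulary size
    dim = 512,            # model dimension
    depth = 6,            # number of reformer blocks
    max_seq_len = 8192,   # long context enabled by LSH attention
    heads = 8,
    causal = True,        # autoregressive language modelling
)

tokens = torch.randint(0, 20000, (1, 8192))  # (batch, seq_len)
logits = model(tokens)                       # (1, 8192, 20000)
```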
Alternatives and similar repositories for reformer-pytorch
Users interested in reformer-pytorch are comparing it to the libraries listed below.
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,148 · Updated 3 years ago
- Pytorch library for fast transformer implementations ☆1,736 · Updated 2 years ago
- Longformer: The Long-Document Transformer ☆2,167 · Updated 2 years ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆798 · Updated last year
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" ☆1,588 · Updated 5 years ago
- My take on a practical implementation of Linformer for Pytorch ☆419 · Updated 3 years ago
- Long Range Arena for Benchmarking Efficient Transformers ☆764 · Updated last year
- ☆3,674 · Updated 3 years ago
- Transformers for Longer Sequences ☆617 · Updated 3 years ago
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch ☆1,172 · Updated 2 years ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆611 · Updated last year
- Source code for "On the Relationship between Self-Attention and Convolutional Layers" ☆1,110 · Updated 2 years ago
- DeLighT: Very Deep and Light-Weight Transformers ☆468 · Updated 4 years ago
- PyTorch implementation of various attention mechanisms for deep learning researchers ☆544 · Updated 3 years ago
- An All-MLP solution for Vision, from Google AI ☆1,045 · Updated 2 months ago
- Repository for the NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients" ☆1,065 · Updated last year
- Ranger - a synergistic optimizer using RAdam (Rectified Adam), Gradient Centralization and LookAhead in one codebase ☆1,203 · Updated last year
- List of efficient attention modules ☆1,012 · Updated 4 years ago
- A Pytorch implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation" ☆562 · Updated 5 years ago
- Implementation of LambdaNetworks, a new approach to image recognition that reaches SOTA with less compute ☆1,530 · Updated 4 years ago
- Transformer training code for sequential tasks ☆611 · Updated 4 years ago
- FastFormers - highly efficient transformer models for NLU ☆707 · Updated 6 months ago
- Rotary Transformer ☆1,032 · Updated 3 years ago
- Single Headed Attention RNN - "Stop thinking with your head" ☆1,184 · Updated 3 years ago
- A fast MoE implementation for PyTorch ☆1,795 · Updated 7 months ago
- Fully featured implementation of Routing Transformer ☆298 · Updated 3 years ago
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) ☆1,179 · Updated last year
- Implementation of gMLP, an all-MLP replacement for Transformers, in Pytorch ☆430 · Updated 4 years ago
- Structured state space sequence models ☆2,734 · Updated last year
- Transformer seq2seq model: a program that builds a language translator from a parallel corpus ☆1,409 · Updated 2 years ago