lucidrains / reformer-pytorch
Reformer, the efficient Transformer, in Pytorch
☆2,191 · Updated 2 years ago
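A minimal usage sketch, following the ReformerLM example in the repo's README; the exact argument names (num_tokens, dim, depth, max_seq_len, lsh_dropout, causal) are assumed from that README and may differ across versions:

```python
# Minimal sketch, assuming the ReformerLM API shown in the
# reformer-pytorch README (argument names may vary by version).
import torch
from reformer_pytorch import ReformerLM

model = ReformerLM(
    num_tokens = 20000,   # vocabulary size
    dim = 512,            # model dimension
    depth = 6,            # number of reformer blocks
    max_seq_len = 8192,   # LSH attention makes contexts this long tractable
    heads = 8,
    lsh_dropout = 0.1,
    causal = True         # autoregressive language modeling
)

x = torch.randint(0, 20000, (1, 8192))  # batch of token ids
logits = model(x)                        # (1, 8192, 20000)
```

The large max_seq_len is the point of the library: LSH-bucketed attention and reversible layers keep memory use far below that of vanilla softmax attention at the same length.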
Alternatives and similar repositories for reformer-pytorch
Users interested in reformer-pytorch are comparing it to the libraries listed below.
- An implementation of Performer, a linear attention-based transformer, in Pytorch · ☆1,169 · Updated 3 years ago
- Pytorch library for fast transformer implementations · ☆1,755 · Updated 2 years ago
- Longformer: The Long-Document Transformer · ☆2,177 · Updated 2 years ago
- Transformer based on a variant of attention that is linear in complexity with respect to sequence length (see the sketch after this list) · ☆818 · Updated last year
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" · ☆1,601 · Updated 5 years ago
- My take on a practical implementation of Linformer for Pytorch · ☆422 · Updated 3 years ago
- Long Range Arena for Benchmarking Efficient Transformers · ☆770 · Updated 2 years ago
- ☆3,682 · Updated 3 years ago
- Transformers for Longer Sequences · ☆622 · Updated 3 years ago
- Repository for the NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients" · ☆1,066 · Updated last year
- [ICLR 2020] Lite Transformer with Long-Short Range Attention · ☆610 · Updated last year
- An All-MLP solution for Vision, from Google AI · ☆1,054 · Updated 5 months ago
- Source code for "On the Relationship between Self-Attention and Convolutional Layers" · ☆1,117 · Updated 2 years ago
- A list of efficient attention modules · ☆1,019 · Updated 4 years ago
- DeLighT: Very Deep and Light-Weight Transformers · ☆468 · Updated 5 years ago
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch · ☆1,185 · Updated 2 years ago
- Hopfield Networks is All You Need · ☆1,883 · Updated 2 years ago
- Single Headed Attention RNN - "Stop thinking with your head" · ☆1,181 · Updated 4 years ago
- Ranger - a synergistic optimizer combining RAdam (Rectified Adam), Gradient Centralization, and LookAhead in one codebase · ☆1,208 · Updated 2 years ago
- A Pytorch implementation of "Attention Is All You Need" and "Weighted Transformer Network for Machine Translation" · ☆574 · Updated 5 years ago
- FastFormers - highly efficient transformer models for NLU · ☆708 · Updated 9 months ago
- Transformer training code for sequential tasks · ☆610 · Updated 4 years ago
- Implementation of the LAMB optimizer (https://arxiv.org/abs/1904.00962) · ☆377 · Updated 5 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators · ☆2,366 · Updated last year
- Fully featured implementation of the Routing Transformer · ☆299 · Updated 4 years ago
- torch-optimizer: a collection of optimizers for Pytorch · ☆3,158 · Updated last year
- PyTorch implementation of various attention mechanisms for deep learning researchers · ☆548 · Updated 3 years ago
- Fast, general, and tested differentiable structured prediction in PyTorch · ☆1,122 · Updated 3 years ago
- Structured state space sequence models · ☆2,798 · Updated last year
- Transformer implementation in PyTorch · ☆490 · Updated 6 years ago
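Several entries above (Performer, the linear-complexity attention variant, Linformer) share one idea: replace softmax attention with a kernelized form so the n × n attention matrix is never materialized. An illustrative sketch of that trick, using the elu(x) + 1 feature map from Katharopoulos et al. (2020); this is not code from any listed repository:

```python
# Illustrative sketch of kernelized "linear attention": with a positive
# feature map phi, softmax(Q K^T) V is approximated by
# phi(Q) @ (phi(K)^T @ V), so cost grows linearly in sequence length.
import torch

def linear_attention(q, k, v):
    # q, k, v: (batch, seq_len, dim)
    phi = lambda t: torch.nn.functional.elu(t) + 1   # positive feature map
    q, k = phi(q), phi(k)
    kv = torch.einsum('bnd,bne->bde', k, v)          # (dim, dim) summary, O(n)
    # per-query normalizer: 1 / (phi(q_i) . sum_n phi(k_n))
    z = 1 / (q * k.sum(dim=1, keepdim=True)).sum(dim=-1, keepdim=True)
    return torch.einsum('bnd,bde->bne', q, kv) * z

q = k = v = torch.randn(2, 4096, 64)
out = linear_attention(q, k, v)   # (2, 4096, 64), no 4096 x 4096 matrix formed
```

The design choice is the order of multiplication: computing phi(K)^T V first yields a dim × dim summary, so memory and time scale with sequence length rather than its square.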