lucidrains / reformer-pytorch
Reformer, the efficient Transformer, in Pytorch
☆2,155 · Updated last year
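For context, here is a minimal usage sketch of the library itself. It assumes the `ReformerLM` class and keyword arguments documented in the repository's README; the hyperparameter values are illustrative only, not recommendations.

```python
import torch
from reformer_pytorch import ReformerLM  # pip install reformer_pytorch

# Small causal Reformer language model. LSH attention and reversible
# layers are what let it handle long contexts with modest memory.
model = ReformerLM(
    num_tokens=20000,   # vocabulary size (illustrative)
    dim=512,            # model width
    depth=6,            # number of layers
    max_seq_len=8192,   # long context enabled by LSH attention
    heads=8,
    causal=True,        # autoregressive masking
)

tokens = torch.randint(0, 20000, (1, 8192))  # dummy batch of token ids
logits = model(tokens)                       # -> (1, 8192, 20000)
```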
Alternatives and similar repositories for reformer-pytorch:
Users interested in reformer-pytorch are comparing it to the libraries listed below.
- Pytorch library for fast transformer implementations ☆1,687 · Updated 2 years ago
- An implementation of Performer, a linear attention-based transformer, in Pytorch (see the usage sketch after this list) ☆1,116 · Updated 3 years ago
- Transformer based on a variant of attention that is linear in complexity with respect to sequence length ☆751 · Updated 10 months ago
- Longformer: The Long-Document Transformer ☆2,094 · Updated 2 years ago
- My take on a practical implementation of Linformer for Pytorch. ☆413 · Updated 2 years ago
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" ☆1,562 · Updated 4 years ago
- Long Range Arena for Benchmarking Efficient Transformers ☆748 · Updated last year
- List of efficient attention modules ☆996 · Updated 3 years ago
- torch-optimizer -- collection of optimizers for Pytorch ☆3,092 · Updated last year
- An All-MLP solution for Vision, from Google AI ☆1,015 · Updated 6 months ago
- DeLighT: Very Deep and Light-Weight Transformers ☆468 · Updated 4 years ago
- Implementation of LambdaNetworks, a new approach to image recognition that reaches SOTA with less compute ☆1,531 · Updated 4 years ago
- Transformer training code for sequential tasks ☆610 · Updated 3 years ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆606 · Updated 8 months ago
- Repository for NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients" ☆1,060 · Updated 7 months ago
- Fast, general, and tested differentiable structured prediction in PyTorch ☆1,112 · Updated 2 years ago
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) ☆1,079 · Updated 11 months ago
- higher is a pytorch library allowing users to obtain higher order gradients over losses spanning training loops rather than individual training steps ☆1,610 · Updated 3 years ago
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch ☆1,134 · Updated last year
- Single Headed Attention RNN - "Stop thinking with your head" ☆1,181 · Updated 3 years ago
- Fully featured implementation of Routing Transformer ☆289 · Updated 3 years ago
- Transformers for Longer Sequences ☆596 · Updated 2 years ago
- Hopfield Networks is All You Need ☆1,784 · Updated last year
- The entmax mapping and its loss, a family of sparse softmax alternatives. ☆428 · Updated 9 months ago
- Unsupervised Data Augmentation (UDA) ☆2,188 · Updated 3 years ago
- FastFormers - highly efficient transformer models for NLU ☆704 · Updated last year
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆360 · Updated last year
- Simple XLNet implementation with Pytorch Wrapper ☆582 · Updated 5 years ago
- Ranger - a synergistic optimizer using RAdam (Rectified Adam), Gradient Centralization and LookAhead in one codebase ☆1,197 · Updated last year
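As referenced in the Performer entry above, here is a minimal sketch of the linear-attention usage pattern, assuming the `PerformerLM` class and keyword arguments from lucidrains' performer-pytorch README; the settings are illustrative only.

```python
import torch
from performer_pytorch import PerformerLM  # pip install performer-pytorch

# Performer approximates softmax attention with random features (FAVOR+),
# so time and memory scale linearly in sequence length.
model = PerformerLM(
    num_tokens=20000,   # vocabulary size (illustrative)
    max_seq_len=2048,
    dim=512,
    depth=6,
    heads=8,
    causal=True,        # autoregressive masking
)

tokens = torch.randint(0, 20000, (1, 2048))
logits = model(tokens)  # -> (1, 2048, 20000)
```

Most of the lucidrains libraries in this list follow the same constructor-plus-forward pattern, which makes it straightforward to swap one attention variant for another when benchmarking.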