lucidrains/reformer-pytorch
Reformer, the efficient Transformer, in Pytorch (☆2,097)
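For context, a minimal usage sketch of this repo as a causal language model, patterned on its README. The `ReformerLM` constructor and the argument names shown (`num_tokens`, `dim`, `depth`, `max_seq_len`, `heads`, `lsh_dropout`, `causal`) follow that documentation, but exact names and defaults should be verified against the installed version.

```python
# Minimal sketch of reformer-pytorch as a causal LM, following its README.
import torch
from reformer_pytorch import ReformerLM

model = ReformerLM(
    num_tokens = 20000,   # vocabulary size
    dim = 512,            # model dimension
    depth = 6,            # number of layers
    max_seq_len = 8192,   # long contexts are the point of Reformer
    heads = 8,
    lsh_dropout = 0.1,
    causal = True         # autoregressive masking
)

x = torch.randint(0, 20000, (1, 8192))   # dummy token ids
logits = model(x)                        # shape: (1, 8192, 20000)
```

The long `max_seq_len` is feasible because Reformer replaces full softmax attention with LSH-bucketed attention and uses reversible layers, trading quadratic attention cost and per-layer activation memory for sub-quadratic compute.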
Related projects:
- Pytorch library for fast transformer implementations (☆1,621)
- An implementation of Performer, a linear attention-based transformer, in Pytorch (☆1,080) (see the usage sketch after this list)
- Longformer: The Long-Document Transformer (☆2,028)
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" (☆1,513)
- Transformer based on a variant of attention that has linear complexity with respect to sequence length (☆670)
- My take on a practical implementation of Linformer for Pytorch (☆403)
- Long Range Arena for Benchmarking Efficient Transformers (☆711)
- torch-optimizer: a collection of optimizers for Pytorch (☆3,012)
- A list of efficient attention modules (☆988)
- Transformers for Longer Sequences (☆564)
- A Pytorch implementation of "Attention Is All You Need" and "Weighted Transformer Network for Machine Translation" (☆546)
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch (☆1,077)
- higher, a PyTorch library allowing users to obtain higher-order gradients over losses spanning training loops rather than individual training steps (☆1,581)
- DeLighT: Very Deep and Light-Weight Transformers (☆465)
- Unsupervised Data Augmentation (UDA) (☆2,172)
- Repository for the NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients" (☆1,044)
- Hopfield Networks is All You Need (☆1,660)
- Structured state space sequence models (☆2,361)
- Ranger: a synergistic optimizer combining RAdam (Rectified Adam), Gradient Centralization, and LookAhead in one codebase (☆1,185)
- An All-MLP solution for Vision, from Google AI (☆987)
- [ICLR 2020] Lite Transformer with Long-Short Range Attention (☆596)
- Single Headed Attention RNN: "Stop thinking with your head" (☆1,177)
- Source code for "On the Relationship between Self-Attention and Convolutional Layers" (☆1,075)
- Transformer seq2seq model: a program that can build a language translator from a parallel corpus (☆1,333)
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) (☆941)
- PyTorch original implementation of Cross-lingual Language Model Pretraining (☆2,872)
- On the Variance of the Adaptive Learning Rate and Beyond (☆2,535)
- PyTorch extensions for high performance and large scale training (☆3,149)
- PyTorch implementation of some attentions for Deep Learning researchers (☆511)
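As a contrast with Reformer's LSH attention, the Performer item above swaps in linear (FAVOR+) attention, so cost grows linearly with sequence length. A minimal sketch patterned on performer-pytorch's README; the `PerformerLM` argument names shown follow that documentation and should be verified against the installed version.

```python
# Minimal sketch of the Performer item above (performer-pytorch), per its README.
import torch
from performer_pytorch import PerformerLM

model = PerformerLM(
    num_tokens = 20000,   # vocabulary size
    max_seq_len = 2048,
    dim = 512,
    depth = 6,
    heads = 8,
    causal = True         # autoregressive masking
)

x = torch.randint(0, 20000, (1, 2048))   # dummy token ids
logits = model(x)                        # shape: (1, 2048, 20000)
```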