idiap / fast-transformers
PyTorch library for fast transformer implementations
☆1,723 · Updated 2 years ago
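To give a feel for the library, here is a minimal sketch of its builder API, adapted from the project's README quickstart; the argument names follow that documentation, so treat them as approximate rather than authoritative:

```python
import torch
from fast_transformers.builders import TransformerEncoderBuilder

# attention_type selects the implementation, e.g. "full" for
# standard softmax attention or "linear" for linear attention.
builder = TransformerEncoderBuilder.from_kwargs(
    n_layers=4,
    n_heads=8,
    query_dimensions=64,
    value_dimensions=64,
    feed_forward_dimensions=1024,
    attention_type="linear",
)
model = builder.get().eval()

x = torch.rand(2, 1000, 8 * 64)  # (batch, sequence, n_heads * query_dimensions)
y = model(x)                     # output has the same shape as the input
```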
Alternatives and similar repositories for fast-transformers
Users interested in fast-transformers are comparing it to the libraries listed below.
- An implementation of Performer, a linear attention-based transformer, in PyTorch ☆1,136 · Updated 3 years ago
- Reformer, the efficient Transformer, in PyTorch ☆2,174 · Updated 2 years ago
- Transformer based on a variant of attention whose complexity is linear with respect to sequence length (the shared trick is sketched after this list) ☆784 · Updated last year
- Long Range Arena for Benchmarking Efficient Transformers ☆759 · Updated last year
- Implementation of Perceiver, General Perception with Iterative Attention, in PyTorch ☆1,156 · Updated last year
- My take on a practical implementation of Linformer for PyTorch ☆416 · Updated 2 years ago
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" ☆1,579 · Updated 4 years ago
- An All-MLP solution for Vision, from Google AI ☆1,029 · Updated this week
- Longformer: The Long-Document Transformer ☆2,145 · Updated 2 years ago
- Profiling and inspecting memory in PyTorch ☆1,061 · Updated 11 months ago
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) ☆1,134 · Updated last year
- An implementation of local windowed attention for language modeling ☆460 · Updated 5 months ago
- Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch (usage sketched after this list) ☆707 · Updated this week
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆611 · Updated last year
- torch-optimizer -- collection of optimizers for PyTorch ☆3,124 · Updated last year
- Structured state space sequence models ☆2,678 · Updated 11 months ago
- Standalone TFRecord reader/writer with PyTorch data loaders ☆886 · Updated last month
- Fully featured implementation of Routing Transformer ☆295 · Updated 3 years ago
- Fast, differentiable sorting and ranking in PyTorch ☆817 · Updated last month
- DeLighT: Very Deep and Light-Weight Transformers ☆469 · Updated 4 years ago
- Implementation of the LAMB optimizer (https://arxiv.org/abs/1904.00962) ☆377 · Updated 4 years ago
- Repository for the NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients" ☆1,063 · Updated 11 months ago
- Implementation of Linformer for PyTorch ☆290 · Updated last year
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆795 · Updated last month
- Transformers for Longer Sequences ☆617 · Updated 2 years ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,074 · Updated 2 years ago
- Learning Rate Warmup in PyTorch ☆411 · Updated 3 weeks ago
- Rotary Transformer ☆974 · Updated 3 years ago
- PyTorch extensions for high performance and large scale training ☆3,337 · Updated 2 months ago
- PyTorch implementation of some attentions for Deep Learning Researchers ☆534 · Updated 3 years ago
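Several entries above (Performer, the linear-complexity attention variant, and fast-transformers itself) share one idea: replace softmax attention with a kernel feature map so the key/value summary can be computed once and reused for every query, dropping the cost from O(N²) to O(N) in sequence length. Below is a minimal non-causal sketch of that trick, using the elu(x) + 1 feature map from the fast-transformers authors' paper; it is an illustration of the technique, not any listed library's actual implementation:

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention in the style of Katharopoulos et al. (2020).

    q, k, v: (batch, heads, seq, dim). By associativity,
    phi(Q) @ (phi(K)^T @ V) costs O(N * dim^2) instead of the
    O(N^2 * dim) of softmax attention.
    """
    # Positive feature map phi(x) = elu(x) + 1 stands in for exp().
    q = F.elu(q) + 1
    k = F.elu(k) + 1
    # Summarize keys and values once, then reuse for every query.
    kv = torch.einsum("bhnd,bhne->bhde", k, v)               # phi(K)^T V
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps)
    return torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)

q = k = v = torch.randn(2, 8, 1024, 64)
out = linear_attention(q, k, v)  # (2, 8, 1024, 64)
```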
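The rotary-embeddings entry takes a different angle on position: it rotates queries and keys by position-dependent angles before the attention dot product, so relative offsets are encoded in the product itself. A usage sketch following the rotary-embedding-torch README (package and method names are taken from there; verify against the current release):

```python
import torch
from rotary_embedding_torch import RotaryEmbedding

# Rotate a subset (here 32) of the 64 head dimensions, as in the README.
rotary_emb = RotaryEmbedding(dim=32)

q = torch.randn(1, 8, 1024, 64)  # (batch, heads, seq, head_dim)
k = torch.randn(1, 8, 1024, 64)

q = rotary_emb.rotate_queries_or_keys(q)
k = rotary_emb.rotate_queries_or_keys(k)
# q and k now carry relative position information and can be fed
# into any attention implementation unchanged.
```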