PyTorch library for fast transformer implementations
☆1,769 · Mar 23, 2023 · Updated 3 years ago
Alternatives and similar repositories for fast-transformers
Users interested in fast-transformers are comparing it to the libraries listed below.
- An implementation of Performer, a linear attention-based transformer, in PyTorch ☆1,177 · Feb 2, 2022 · Updated 4 years ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length (see the linear attention sketch after this list) ☆828 · May 5, 2024 · Updated last year
- Reformer, the efficient Transformer, in PyTorch ☆2,190 · Jun 21, 2023 · Updated 2 years ago
- My take on a practical implementation of Linformer for PyTorch. ☆424 · Jul 27, 2022 · Updated 3 years ago
- Fully featured implementation of the Routing Transformer ☆300 · Nov 6, 2021 · Updated 4 years ago
- Long Range Arena for Benchmarking Efficient Transformers ☆787 · Dec 16, 2023 · Updated 2 years ago
- DeLighT: Very Deep and Light-Weight Transformers ☆469 · Oct 16, 2020 · Updated 5 years ago
- Understanding the Difficulty of Training Transformers ☆332 · May 31, 2022 · Updated 3 years ago
- Longformer: The Long-Document Transformer ☆2,194 · Feb 8, 2023 · Updated 3 years ago
- PyTorch extensions for high performance and large scale training. ☆3,407 · Apr 26, 2025 · Updated last year
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,839 · Apr 21, 2026 · Updated last week
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆610 · Jul 11, 2024 · Updated last year
- Fast Block Sparse Matrices for PyTorch ☆550 · Jan 21, 2021 · Updated 5 years ago
- A list of efficient attention modules ☆1,022 · Aug 23, 2021 · Updated 4 years ago
- Implementation of LambdaNetworks, a new approach to image recognition that reaches SOTA with less compute ☆1,526 · Nov 18, 2020 · Updated 5 years ago
- Hopfield Networks is All You Need ☆1,924 · Apr 23, 2023 · Updated 3 years ago
- Transformer training code for sequential tasks ☆610 · Sep 14, 2021 · Updated 4 years ago
- ☆391 · Oct 18, 2023 · Updated 2 years ago
- Flexible and powerful tensor operations for readable and reliable code (for PyTorch, JAX, TF and others; see the einops example after this list) ☆9,468 · Apr 19, 2026 · Updated last week
- FastFormers - highly efficient transformer models for NLU ☆709 · Mar 21, 2025 · Updated last year
- Fast, general, and tested differentiable structured prediction in PyTorch ☆1,128 · Apr 20, 2022 · Updated 4 years ago
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" ☆1,611 · Aug 12, 2020 · Updated 5 years ago
- Structured state space sequence models ☆2,890 · Jul 17, 2024 · Updated last year
- ☆3,699 · Sep 21, 2022 · Updated 3 years ago
- An implementation of the efficient attention module. ☆329 · Nov 30, 2020 · Updated 5 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆270 · Aug 10, 2021 · Updated 4 years ago
- Transformer-related optimization, including BERT, GPT ☆6,412 · Mar 27, 2024 · Updated 2 years ago
- [NeurIPS 2020] Official Implementation: "SMYRF: Efficient Attention using Asymmetric Clustering". ☆50 · Sep 6, 2023 · Updated 2 years ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆32,212 · Sep 30, 2025 · Updated 7 months ago
- Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755) ☆2,113 · Jan 4, 2022 · Updated 4 years ago
- torch-optimizer -- a collection of optimizers for PyTorch ☆3,168 · Mar 22, 2024 · Updated 2 years ago
- Implementation of Perceiver, General Perception with Iterative Attention, in PyTorch ☆1,202 · Aug 22, 2023 · Updated 2 years ago
- PyTorch original implementation of Cross-lingual Language Model Pretraining. ☆2,932 · Feb 14, 2023 · Updated 3 years ago
- Fast and memory-efficient exact attention (see the fused-attention example after this list) ☆23,563 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,437 · Apr 21, 2026 · Updated last week
- ☆220 · Jun 8, 2020 · Updated 5 years ago
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. ☆1,546 · Jul 18, 2025 · Updated 9 months ago
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch ☆8,950 · Apr 20, 2026 · Updated last week
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆228 · Apr 18, 2022 · Updated 4 years ago
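Several entries above (fast-transformers itself, Performer, the linear attention transformer) share one idea: replace the softmax with a kernel feature map so attention can be computed in O(N) rather than O(N²) time. Below is a minimal non-causal sketch of that idea in plain PyTorch, using the elu(x) + 1 feature map from "Transformers are RNNs" (Katharopoulos et al., 2020); it is an illustration of the technique, not any listed library's actual implementation.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention: O(N) in sequence length."""
    # Positive feature map phi(x) = elu(x) + 1 stands in for softmax.
    q, k = F.elu(q) + 1, F.elu(k) + 1
    # Associativity: (Q K^T) V == Q (K^T V); computing K^T V first
    # costs O(N * d * d_v) instead of O(N^2 * d).
    kv = torch.einsum("bnd,bne->bde", k, v)            # (batch, d, d_v)
    z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps)
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)

q = k = v = torch.randn(2, 4096, 64)
print(linear_attention(q, k, v).shape)  # torch.Size([2, 4096, 64])
```

Note that the sequence length (4096 here) never appears quadratically: the `kv` intermediate is only d × d_v, which is why these models scale to long inputs.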
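For the einops entry, a small usage example may help, since the library is defined by its API. This sketch shows the `rearrange` pattern commonly used to split and merge attention heads; the shapes are illustrative.

```python
import torch
from einops import rearrange

x = torch.randn(2, 1024, 512)                      # (batch, seq, dim)

# Split the feature dimension into 8 heads of 64 channels each...
heads = rearrange(x, "b n (h d) -> b h n d", h=8)
print(heads.shape)   # torch.Size([2, 8, 1024, 64])

# ...and merge them back; the pattern string documents the shapes in place.
merged = rearrange(heads, "b h n d -> b n (h d)")
print(merged.shape)  # torch.Size([2, 1024, 512])
```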
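For the "fast and memory-efficient exact attention" entry: since PyTorch 2.0, `torch.nn.functional.scaled_dot_product_attention` dispatches to a fused FlashAttention-style kernel when the hardware and dtypes allow it, which is the easiest way to try this class of kernels without installing a separate package. A minimal example of that built-in entry point follows; the FlashAttention repository's own `flash_attn` package exposes a different API, which is not shown here.

```python
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"

# PyTorch's SDPA convention: (batch, heads, seq_len, head_dim).
q = torch.randn(2, 8, 1024, 64, device=device)
k = torch.randn(2, 8, 1024, 64, device=device)
v = torch.randn(2, 8, 1024, 64, device=device)

# On a supported GPU this dispatches to a fused, memory-efficient
# kernel; on CPU it falls back to the straightforward math backend.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 1024, 64])
```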