PyTorch library for fast transformer implementations
☆1,763 · updated Mar 23, 2023
Alternatives and similar repositories for fast-transformers
Users interested in fast-transformers are comparing it to the libraries listed below.
- My take on a practical implementation of Linformer for PyTorch. (☆423, updated Jul 27, 2022)
- Long Range Arena for Benchmarking Efficient Transformers (☆786, updated Dec 16, 2023)
- DeLighT: Very Deep and Light-Weight Transformers (☆469, updated Oct 16, 2020)
- Understanding the Difficulty of Training Transformers (☆332, updated May 31, 2022)
- Longformer: The Long-Document Transformer (☆2,189, updated Feb 8, 2023)
- PyTorch extensions for high performance and large scale training. (☆3,403, updated Apr 26, 2025)
- [ICLR 2020] Lite Transformer with Long-Short Range Attention (☆610, updated Jul 11, 2024)
- Fast Block Sparse Matrices for PyTorch (☆549, updated Jan 21, 2021)
- List of efficient attention modules (☆1,022, updated Aug 23, 2021)
- Hopfield Networks is All You Need (☆1,907, updated Apr 23, 2023)
- Transformer training code for sequential tasks (☆609, updated Sep 14, 2021)
- (no description) (☆388, updated Oct 18, 2023)
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others); a short usage sketch follows this list. (☆9,430, updated Feb 20, 2026)
- FastFormers - highly efficient transformer models for NLU (☆709, updated Mar 21, 2025)
- Fast, general, and tested differentiable structured prediction in PyTorch (☆1,124, updated Apr 20, 2022)
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" (☆1,611, updated Aug 12, 2020)
- Structured state space sequence models (☆2,869, updated Jul 17, 2024)
- (no description) (☆3,695, updated Sep 21, 2022)
- An implementation of the efficient attention module. (☆328, updated Nov 30, 2020)
- [NeurIPS 2020] Official Implementation: "SMYRF: Efficient Attention using Asymmetric Clustering". (☆50, updated Sep 6, 2023)
- Transformer-related optimization, including BERT, GPT (☆6,397, updated Mar 27, 2024)
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. (☆32,190, updated Sep 30, 2025)
- Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755) (☆2,111, updated Jan 4, 2022)
- torch-optimizer: a collection of optimizers for PyTorch (☆3,167, updated Mar 22, 2024)
- PyTorch original implementation of Cross-lingual Language Model Pretraining. (☆2,927, updated Feb 14, 2023)
- Fast and memory-efficient exact attention; a fused-attention sketch follows this list. (☆22,832, updated this week)
- Hackable and optimized Transformers building blocks, supporting a composable construction. (☆10,373, updated this week)
- (no description) (☆221, updated Jun 8, 2020)
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. (☆1,542, updated Jul 18, 2025)
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch; a mixed-precision sketch follows this list. (☆8,936, updated this week)
- Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes. (☆30,926, updated Mar 10, 2026)
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). (☆229, updated Apr 18, 2022)
- LightSeq: A High Performance Library for Sequence Processing and Generation (☆3,302, updated May 16, 2023)
- On the Variance of the Adaptive Learning Rate and Beyond (☆2,549, updated Jul 31, 2021)
- Single Headed Attention RNN - "Stop thinking with your head" (☆1,181, updated Nov 27, 2021)
- Repository for NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients" (☆1,068, updated Aug 9, 2024)
- Fast, differentiable sorting and ranking in PyTorch (☆853, updated Mar 3, 2026)
- Official DeiT repository (☆4,327, updated Mar 15, 2024)
- Google Research (☆37,494, updated this week)
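
A few entries above have small enough APIs to show in miniature. First, the einops entry ("Flexible and powerful tensor operations"): a minimal sketch of its `rearrange`/`reduce` pattern language, assuming `einops` and `torch` are installed; the shapes are illustrative only.

```python
# Minimal einops sketch: named-axis reshaping and reduction.
import torch
from einops import rearrange, reduce

x = torch.randn(2, 8, 64, 32)  # (batch, heads, seq, head_dim)

# Merge heads back into the feature dimension: (b, h, n, d) -> (b, n, h*d)
merged = rearrange(x, 'b h n d -> b n (h d)')
print(merged.shape)  # torch.Size([2, 64, 256])

# Mean-pool over the sequence axis with a named reduction
pooled = reduce(merged, 'b n f -> b f', 'mean')
print(pooled.shape)  # torch.Size([2, 256])
```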
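For the "Fast and memory-efficient exact attention" entry (the FlashAttention repository), a hedged sketch: rather than that repo's own `flash_attn` API, this uses stock PyTorch 2.x's fused `scaled_dot_product_attention`, which can dispatch to FlashAttention-style kernels on supported GPUs. Shapes are illustrative.

```python
# Fused exact attention in one call via PyTorch 2.x SDPA.
import torch
import torch.nn.functional as F

q = torch.randn(2, 8, 1024, 64)  # (batch, heads, seq_len, head_dim)
k = torch.randn(2, 8, 1024, 64)
v = torch.randn(2, 8, 1024, 64)

# Causal, memory-efficient exact attention without materializing
# the full (seq_len x seq_len) attention matrix.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 1024, 64])
```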
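And for the apex-style "easy mixed precision" entry, a sketch of the loss-scaled training step that library popularized, written with native `torch.amp` (the in-tree successor to `apex.amp`) rather than apex itself. It assumes a CUDA device; the model, optimizer, and batch are placeholders.

```python
# Mixed-precision training step with automatic loss scaling.
import torch

model = torch.nn.Linear(512, 10).cuda()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
scaler = torch.amp.GradScaler("cuda")

x = torch.randn(32, 512, device="cuda")
y = torch.randint(0, 10, (32,), device="cuda")

with torch.amp.autocast(device_type="cuda"):  # forward pass in reduced precision
    loss = torch.nn.functional.cross_entropy(model(x), y)

scaler.scale(loss).backward()  # scale the loss so fp16 grads don't underflow
scaler.step(opt)               # unscales grads, then takes the optimizer step
scaler.update()                # adapts the loss scale for the next iteration
```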