Transformer based on a variant of attention with linear complexity with respect to sequence length
☆826 · May 5, 2024 · Updated last year
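The linear-complexity claim rests on the kernel feature-map trick: instead of materializing the n×n matrix softmax(QKᵀ), attention is computed as φ(Q)(φ(K)ᵀV), so cost grows as O(n·d²) rather than O(n²·d). A minimal NumPy sketch of that idea, assuming the elu(x)+1 feature map from the linear-attention literature — illustrative only, not the exact variant this repository implements:

```python
import numpy as np

def linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention via the kernel trick.

    q, k: (n, d) queries and keys; v: (n, d_v) values.
    """
    # elu(x) + 1 feature map keeps features positive
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    qf, kf = phi(q), phi(k)
    # associativity: compute (K^T V) first -> O(n * d * d_v), never an n x n matrix
    kv = kf.T @ v                    # (d, d_v)
    z = qf @ kf.sum(axis=0)          # (n,) row normalizers
    return (qf @ kv) / (z[:, None] + eps)

n, d = 16, 8
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(3, n, d))
out = linear_attention(q, k, v)      # shape (16, 8)
```

Because the normalizer is also linear in the keys, the result matches the explicit quadratic form φ(Q)φ(K)ᵀV with row normalization, up to the small `eps` stabilizer.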
Alternatives and similar repositories for linear-attention-transformer
Users that are interested in linear-attention-transformer are comparing it to the libraries listed below.
- Pytorch library for fast transformer implementations ☆1,765 · Mar 23, 2023 · Updated 3 years ago
- Implementation of Linformer for Pytorch ☆305 · Jan 5, 2024 · Updated 2 years ago
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,174 · Feb 2, 2022 · Updated 4 years ago
- Reformer, the efficient Transformer, in Pytorch ☆2,192 · Jun 21, 2023 · Updated 2 years ago
- My take on a practical implementation of Linformer for Pytorch. ☆423 · Jul 27, 2022 · Updated 3 years ago
- Fully featured implementation of Routing Transformer ☆300 · Nov 6, 2021 · Updated 4 years ago
- An implementation of local windowed attention for language modeling ☆498 · Jul 16, 2025 · Updated 8 months ago
- An implementation of the efficient attention module. ☆328 · Nov 30, 2020 · Updated 5 years ago
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,806 · Updated this week
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆269 · Aug 10, 2021 · Updated 4 years ago
- Implementation of Fast Transformer in Pytorch ☆176 · Aug 26, 2021 · Updated 4 years ago
- A simple Transformer where the softmax has been replaced with normalization ☆20 · Sep 11, 2020 · Updated 5 years ago
- Axial Positional Embedding for Pytorch ☆84 · Feb 25, 2025 · Updated last year
- Pytorch implementation of Compressive Transformers, from Deepmind ☆163 · Oct 4, 2021 · Updated 4 years ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆166 · Feb 12, 2024 · Updated 2 years ago
- Implementation of LambdaNetworks, a new approach to image recognition that reaches SOTA with less compute ☆1,532 · Nov 18, 2020 · Updated 5 years ago
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch ☆1,196 · Aug 22, 2023 · Updated 2 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆120 · Aug 4, 2021 · Updated 4 years ago
- Standalone Product Key Memory module in Pytorch - for augmenting Transformer models ☆87 · Nov 1, 2025 · Updated 4 months ago
- List of efficient attention modules ☆1,022 · Aug 23, 2021 · Updated 4 years ago
- Implementation of Token Shift GPT - an autoregressive model that relies solely on shifting the sequence space for mixing ☆49 · Jan 27, 2022 · Updated 4 years ago
- Implementation of Nyström Self-attention, from the paper Nyströmformer ☆145 · Mar 24, 2025 · Updated last year
- Understanding the Difficulty of Training Transformers ☆332 · May 31, 2022 · Updated 3 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆126 · Nov 13, 2020 · Updated 5 years ago
- ☆221 · Jun 8, 2020 · Updated 5 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆46 · Mar 3, 2021 · Updated 5 years ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆611 · Jul 11, 2024 · Updated last year
- Longformer: The Long-Document Transformer ☆2,188 · Feb 8, 2023 · Updated 3 years ago
- Long Range Arena for Benchmarking Efficient Transformers ☆787 · Dec 16, 2023 · Updated 2 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆228 · Apr 18, 2022 · Updated 3 years ago
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ ☆57 · Jan 5, 2023 · Updated 3 years ago
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" ☆1,611 · Aug 12, 2020 · Updated 5 years ago
- Usable implementation of "Bootstrap Your Own Latent" self-supervised learning, from Deepmind, in Pytorch ☆1,876 · Jul 15, 2024 · Updated last year
- torch-optimizer -- collection of optimizers for Pytorch ☆3,169 · Mar 22, 2024 · Updated 2 years ago
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆848 · Sep 13, 2023 · Updated 2 years ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ☆804 · Jan 30, 2026 · Updated 2 months ago
- Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Py… ☆24,996 · Updated this week
- Transformer training code for sequential tasks ☆609 · Sep 14, 2021 · Updated 4 years ago
- PyTorch extensions for high performance and large scale training. ☆3,404 · Apr 26, 2025 · Updated 11 months ago