An implementation of Performer, a linear attention-based transformer, in Pytorch
☆1,174, updated Feb 2, 2022
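Performer belongs to the family of transformers that replace the quadratic softmax attention matrix with a kernel feature map, so attention can be computed in time linear in sequence length. The factorization at the heart of this family can be sketched in plain NumPy. This is a minimal illustration, not the library's API; the `1 + elu` feature map used here is the simple positive map from the kernelized linear-transformer line of work, whereas Performer itself uses FAVOR+ random features:

```python
import numpy as np

def linear_attention(Q, K, V):
    """O(N) attention: push queries/keys through a positive feature map,
    then reassociate (phi(Q) @ phi(K).T) @ V as phi(Q) @ (phi(K).T @ V),
    so the N x N attention matrix is never formed. Shapes: (seq, dim)."""
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1 > 0
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                    # (d, d_v): a fixed-size summary of all keys/values
    Z = Qp @ Kp.sum(axis=0)          # per-query normalizer (row sums of the implicit matrix)
    return (Qp @ KV) / Z[:, None]

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
print(linear_attention(Q, K, V).shape)  # (8, 4)
```

Because the implicit attention weights are nonnegative and normalized, each output row is a convex combination of value rows, just as in softmax attention; only the cost changes, from O(N²) to O(N).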
Alternatives and similar repositories for performer-pytorch
Users that are interested in performer-pytorch are comparing it to the libraries listed below.
- Pytorch library for fast transformer implementations (☆1,767, updated Mar 23, 2023)
- Reformer, the efficient Transformer, in Pytorch (☆2,191, updated Jun 21, 2023)
- Transformer based on a variant of attention that is linear in complexity with respect to sequence length (☆824, updated May 5, 2024)
- Implementation of Linformer for Pytorch (☆305, updated Jan 5, 2024)
- My take on a practical implementation of Linformer for Pytorch (☆424, updated Jul 27, 2022)
- Implementation of LambdaNetworks, a new approach to image recognition that reaches SOTA with less compute (☆1,532, updated Nov 18, 2020)
- Long Range Arena for Benchmarking Efficient Transformers (☆787, updated Dec 16, 2023)
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch (☆1,196, updated Aug 22, 2023)
- Fully featured implementation of Routing Transformer (☆300, updated Nov 6, 2021)
- Implementation of Nyström Self-attention, from the paper Nyströmformer (☆145, updated Mar 24, 2025)
- ☆389, updated Oct 18, 2023
- A concise but complete full-attention transformer with a set of promising experimental features from various papers (☆5,812, updated Mar 27, 2026)
- A simple Transformer where the softmax has been replaced with normalization (☆20, updated Sep 11, 2020)
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch (☆120, updated Aug 4, 2021)
- DeLighT: Very Deep and Light-Weight Transformers (☆469, updated Oct 16, 2020)
- An implementation of local windowed attention for language modeling (☆498, updated Jul 16, 2025)
- Implementation of Feedback Transformer in Pytorch (☆108, updated Mar 2, 2021)
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) (☆9,445, updated Feb 20, 2026)
- Repository for NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients" (☆1,070, updated Aug 9, 2024)
- Implementation of Fast Transformer in Pytorch (☆176, updated Aug 26, 2021)
- Longformer: The Long-Document Transformer (☆2,188, updated Feb 8, 2023)
- Pytorch implementation of Compressive Transformers, from Deepmind (☆164, updated Oct 4, 2021)
- [ICLR 2020] Lite Transformer with Long-Short Range Attention (☆611, updated Jul 11, 2024)
- FastFormers - highly efficient transformer models for NLU (☆709, updated Mar 21, 2025)
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention (☆270, updated Aug 10, 2021)
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis (☆147, updated Jul 26, 2021)
- Google Research (☆37,626, updated this week)
- ☆221, updated Jun 8, 2020
- Implementation of Bottleneck Transformer in Pytorch (☆677, updated Sep 20, 2021)
- Hopfield Networks is All You Need (☆1,908, updated Apr 23, 2023)
- Simple NumPy implementation of the FAVOR+ attention mechanism, https://teddykoker.com/2020/11/performers/ (☆38, updated Dec 30, 2020)
- Implementation of the convolutional module from the Conformer paper, for use in Transformers (☆433, updated May 17, 2023)
- An implementation of the efficient attention module (☆329, updated Nov 30, 2020)
- Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch (☆5,628, updated Feb 17, 2024)
- Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes (☆30,990, updated this week)
- Implementation of TransGanFormer, an all-attention GAN that combines findings from the recent GanFormer and TransGan papers (☆155, updated Apr 27, 2021)
- GPT, but made only out of MLPs (☆89, updated May 25, 2021)
- Structured state space sequence models (☆2,875, updated Jul 17, 2024)
- Usable implementation of "Bootstrap Your Own Latent" self-supervised learning, from Deepmind, in Pytorch (☆1,877, updated Jul 15, 2024)
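Several of the repositories above (the FAVOR+ NumPy implementation in particular) center on the approximation Performer introduced: estimating the softmax attention kernel with positive random features, so the quadratic attention matrix never has to be materialized. A hedged NumPy sketch of that idea, assuming Gaussian projections and the usual exp(q·k/√d) kernel; the function and parameter names here are illustrative, not any listed repository's API:

```python
import numpy as np

def positive_features(X, W):
    """FAVOR+-style positive random features:
    phi(x) = exp(W @ x - ||x||^2 / 2) / sqrt(m), for which
    E[phi(q) . phi(k)] = exp(q . k), an unbiased softmax-kernel estimate."""
    m = W.shape[0]
    return np.exp(X @ W.T - 0.5 * (X ** 2).sum(-1, keepdims=True)) / np.sqrt(m)

def favor_attention(Q, K, V, n_features=4096, seed=0):
    """Approximate softmax attention in O(N * m * d), linear in sequence length."""
    d = Q.shape[-1]
    W = np.random.default_rng(seed).standard_normal((n_features, d))
    # Scaling inputs by d**-0.25 makes phi(q) . phi(k) estimate exp(q . k / sqrt(d)).
    Qp = positive_features(Q / d ** 0.25, W)
    Kp = positive_features(K / d ** 0.25, W)
    num = Qp @ (Kp.T @ V)            # reassociated: no N x N matrix is formed
    den = Qp @ Kp.sum(axis=0)        # per-query normalizer
    return num / den[:, None]

def softmax_attention(Q, K, V):
    """Exact quadratic baseline for comparison."""
    A = np.exp(Q @ K.T / np.sqrt(Q.shape[-1]))
    return (A / A.sum(-1, keepdims=True)) @ V
```

With enough random features the two functions agree closely; the payoff of the approximation is that `favor_attention` touches only (seq × m) and (m × d) arrays, which is what makes the linear-attention repositories in this list practical at long sequence lengths.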