Fully featured implementation of Routing Transformer
☆300 · Nov 6, 2021 · Updated 4 years ago
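The Routing Transformer's core trick is content-based sparse attention: tokens are grouped by k-means clustering on their features, and each token attends only within its cluster rather than over the full sequence. A minimal numpy sketch of that idea (a toy illustration with a hypothetical `routed_attention` helper, not the repository's API):

```python
# Toy sketch of the Routing Transformer idea: cluster tokens by content
# with a few k-means steps, then restrict attention to each cluster.
import numpy as np

def routed_attention(x, n_clusters=2, n_iters=5, seed=0):
    n, d = x.shape
    rng = np.random.default_rng(seed)
    # initialise centroids from random tokens, then refine with k-means
    centroids = x[rng.choice(n, n_clusters, replace=False)]
    for _ in range(n_iters):
        assign = np.argmin(((x[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if (assign == c).any():
                centroids[c] = x[assign == c].mean(0)
    out = np.zeros_like(x)
    for c in range(n_clusters):
        idx = np.where(assign == c)[0]
        if len(idx) == 0:
            continue
        xc = x[idx]                        # tokens routed to cluster c
        scores = xc @ xc.T / np.sqrt(d)    # attention restricted to the cluster
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)
        out[idx] = w @ xc
    return out, assign
```

With `k` clusters of roughly `n / k` tokens each, the attention cost drops from O(n²) per sequence to roughly O(n² / k); the real library learns the routing jointly with the model rather than re-clustering from scratch.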
Alternatives and similar repositories for routing-transformer
Users that are interested in routing-transformer are comparing it to the libraries listed below.
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆269 · Aug 10, 2021 · Updated 4 years ago
- Reformer, the efficient Transformer, in Pytorch ☆2,192 · Jun 21, 2023 · Updated 2 years ago
- Transformer based on a variant of attention that is linear in complexity with respect to sequence length ☆826 · May 5, 2024 · Updated last year
- Pytorch implementation of Compressive Transformers, from Deepmind ☆163 · Oct 4, 2021 · Updated 4 years ago
- Pytorch library for fast transformer implementations ☆1,765 · Mar 23, 2023 · Updated 3 years ago
- Implementation of Linformer for Pytorch ☆305 · Jan 5, 2024 · Updated 2 years ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆49 · Jan 27, 2022 · Updated 4 years ago
- My take on a practical implementation of Linformer for Pytorch ☆423 · Jul 27, 2022 · Updated 3 years ago
- Understanding the Difficulty of Training Transformers ☆332 · May 31, 2022 · Updated 3 years ago
- An implementation of local windowed attention for language modeling ☆498 · Jul 16, 2025 · Updated 8 months ago
- Another attempt at a long-context / efficient transformer by me ☆38 · Apr 11, 2022 · Updated 3 years ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆610 · Jul 11, 2024 · Updated last year
- Cascaded Text Generation with Markov Transformers ☆130 · Mar 20, 2023 · Updated 3 years ago
- ☆221 · Jun 8, 2020 · Updated 5 years ago
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆67 · Jan 10, 2023 · Updated 3 years ago
- Transformer training code for sequential tasks ☆609 · Sep 14, 2021 · Updated 4 years ago
- Implementation of Nyström Self-attention, from the paper Nyströmformer ☆145 · Mar 24, 2025 · Updated last year
- Relative Positional Encoding for Transformers with Linear Complexity ☆65 · Apr 1, 2022 · Updated 3 years ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Oct 9, 2022 · Updated 3 years ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆166 · Feb 12, 2024 · Updated 2 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆126 · Nov 13, 2020 · Updated 5 years ago
- DeLighT: Very Deep and Light-Weight Transformers ☆469 · Oct 16, 2020 · Updated 5 years ago
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,806 · Updated this week
- A simple Transformer where the softmax has been replaced with normalization ☆20 · Sep 11, 2020 · Updated 5 years ago
- Implementation of Multistream Transformers in Pytorch ☆54 · Jul 31, 2021 · Updated 4 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ☆76 · Dec 4, 2022 · Updated 3 years ago
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI ☆98 · Dec 31, 2021 · Updated 4 years ago
- Fast Block Sparse Matrices for Pytorch ☆550 · Jan 21, 2021 · Updated 5 years ago
- Implementation of TransGanFormer, an all-attention GAN that combines the findings from the recent GanFormer and TransGan papers ☆155 · Apr 27, 2021 · Updated 4 years ago
- Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones ☆201 · Mar 24, 2021 · Updated 5 years ago
- Implementation of Fast Transformer in Pytorch ☆176 · Aug 26, 2021 · Updated 4 years ago
- An attempt at an implementation of Glom, Geoffrey Hinton's new idea that integrates concepts from neural fields and top-down-bottom-up processing… ☆196 · Mar 27, 2021 · Updated 5 years ago
- ☆25 · Apr 4, 2022 · Updated 3 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆34 · Oct 30, 2020 · Updated 5 years ago
- Code accompanying the paper "Normalized Attention Without Probability Cage" ☆17 · Nov 9, 2021 · Updated 4 years ago
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch ☆1,196 · Aug 22, 2023 · Updated 2 years ago
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆127 · Apr 5, 2021 · Updated 4 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆46 · Mar 3, 2021 · Updated 5 years ago
- The entmax mapping and its loss, a family of sparse softmax alternatives ☆465 · Jun 22, 2024 · Updated last year
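Several of the repositories above (local-attention, Lite Transformer's long-short range attention, BP-Transformer) restrict each token to a sliding window of nearby positions instead of the full sequence. A minimal numpy sketch of that pattern, with a hypothetical `local_attention` helper that is not the API of any listed library:

```python
# Toy sketch of local windowed attention: each token attends only to
# tokens within `window` positions on either side, giving O(n * window)
# cost instead of the O(n^2) of full attention.
import numpy as np

def local_attention(x, window=2):
    n, d = x.shape
    out = np.zeros_like(x)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        ctx = x[lo:hi]                     # the local context for token i
        scores = ctx @ x[i] / np.sqrt(d)
        w = np.exp(scores - scores.max())  # numerically stable softmax
        w /= w.sum()
        out[i] = w @ ctx
    return out
```

Setting `window` at least as large as the sequence length recovers ordinary full softmax attention, which makes the restriction easy to sanity-check.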