Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention
☆270 · Updated Aug 10, 2021
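The repository implements Sparse Sinkhorn Attention (Tay et al., 2020), whose central trick is sorting buckets of keys/values with a differentiable permutation produced by Sinkhorn normalization: alternately renormalizing the rows and columns of a score matrix until it approximates a doubly-stochastic (soft permutation) matrix. A minimal Pytorch sketch of that step, not the repository's API (the function name and iteration count here are illustrative):

```python
import torch

def sinkhorn_normalize(logits: torch.Tensor, n_iters: int = 8) -> torch.Tensor:
    """Sinkhorn normalization in log space: alternately normalize rows
    and columns so the result approaches a doubly-stochastic matrix."""
    log_alpha = logits
    for _ in range(n_iters):
        # Make each row sum to 1 in probability space...
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=-1, keepdim=True)
        # ...then each column.
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=-2, keepdim=True)
    return log_alpha.exp()

# A 4x4 bucket-to-bucket score matrix relaxed toward a soft permutation.
scores = torch.randn(4, 4)
perm = sinkhorn_normalize(scores)
print(perm.sum(dim=-1), perm.sum(dim=-2))  # both approximately all ones
```

Because every row and column of `perm` sums to roughly 1, it can stand in for a hard bucket permutation while remaining differentiable.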
Alternatives and similar repositories for sinkhorn-transformer
Users interested in sinkhorn-transformer are comparing it to the libraries listed below.
- Fully featured implementation of Routing Transformer (☆300, updated Nov 6, 2021)
- My take on a practical implementation of Linformer for Pytorch (☆424, updated Jul 27, 2022)
- Reformer, the efficient Transformer, in Pytorch (☆2,190, updated Jun 21, 2023)
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights (☆19, updated Oct 9, 2022)
- Implementation of Linformer for Pytorch (☆305, updated Jan 5, 2024)
- Transformer training code for sequential tasks (☆609, updated Sep 14, 2021)
- Pytorch library for fast transformer implementations (☆1,767, updated Mar 23, 2023)
- High-performance Pytorch modules (☆18, updated Jan 14, 2023)
- Cascaded Text Generation with Markov Transformers (☆130, updated Mar 20, 2023)
- The entmax mapping and its loss, a family of sparse softmax alternatives (☆469, updated Jun 22, 2024)
- Transformer based on a variant of attention that is linear in complexity with respect to sequence length (☆825, updated May 5, 2024)
- Axial Positional Embedding for Pytorch (☆84, updated Feb 25, 2025)
- Pytorch implementation of Compressive Transformers, from Deepmind (☆164, updated Oct 4, 2021)
- Longformer: The Long-Document Transformer (☆2,190, updated Feb 8, 2023)
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing (☆49, updated Jan 27, 2022)
- ☆220 (updated Jun 8, 2020)
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" (☆127, updated Apr 5, 2021)
- Generalizing Natural Language Analysis through Span-relation Representations (☆91, updated Sep 22, 2025)
- Implementation of Lie Transformer, Equivariant Self-Attention, in Pytorch (☆97, updated Feb 19, 2021)
- An implementation of Performer, a linear attention-based transformer, in Pytorch (☆1,177, updated Feb 2, 2022)
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" (☆1,611, updated Aug 12, 2020)
- A concise but complete full-attention transformer with a set of promising experimental features from various papers (☆5,816, updated Mar 27, 2026)
- ☆19 (updated Oct 26, 2022)
- Implementation of TransGanFormer, an all-attention GAN that combines the findings from the recent GanFormer and TransGan papers (☆155, updated Apr 27, 2021)
- Standalone Product Key Memory module in Pytorch, for augmenting Transformer models (☆87, updated Nov 1, 2025)
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning (☆166, updated Feb 12, 2024)
- Implementation of an Attention layer where each head can attend to more than just one token, using coordinate descent to pick the top-k (☆47, updated Jul 16, 2023)
- A Pytorch implementation of the Reformer Network (https://openreview.net/pdf?id=rkgNKkHtvB) (☆53, updated Nov 22, 2022)
- Sparse and structured neural attention mechanisms (☆224, updated Aug 31, 2020)
- Neural Text Generation with Unlikelihood Training (☆311, updated Aug 31, 2021)
- [ICLR 2020] Lite Transformer with Long-Short Range Attention (☆611, updated Jul 11, 2024)
- An attempt to merge ESBN with Transformers, to endow Transformers with the ability to emergently bind symbols (☆16, updated Aug 3, 2021)
- Code for "Dissecting Generation Modes for Abstractive Summarization Models via Ablation and Attribution" (ACL 2021) (☆13, updated Jun 2, 2021)
- Code for the paper by Bamler & Mandt, "Extreme Classification via Adversarial Softmax Approximation" (ICLR 2020) (☆14, updated Apr 8, 2020)
- ☆65 (updated Apr 8, 2020)
- ☆21 (updated Mar 15, 2023)
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ (☆57, updated Jan 5, 2023)
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch (☆126, updated Nov 13, 2020)
- An implementation of local windowed attention for language modeling (☆498, updated Jul 16, 2025); the attention pattern is sketched below
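For the local windowed attention entry above, the pattern is plain attention restricted to a trailing window of positions. A minimal dense-mask sketch, assuming a single head and pre-projected q/k/v (the helper name is hypothetical; practical implementations compute scores per bucket to get linear memory rather than materializing the full mask):

```python
import torch
import torch.nn.functional as F

def local_window_attention(q, k, v, window: int):
    """Causal local attention: position i attends only to positions
    j with i - window < j <= i. Shapes: (batch, seq, dim)."""
    b, n, d = q.shape
    scores = q @ k.transpose(-2, -1) / d ** 0.5            # (b, n, n)
    idx = torch.arange(n)
    # Keep j only if it is causal (j <= i) and inside the window.
    keep = (idx[None, :] <= idx[:, None]) & (idx[:, None] - idx[None, :] < window)
    scores = scores.masked_fill(~keep, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 16, 32)
out = local_window_attention(q, k, v, window=4)            # (1, 16, 32)
```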