Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention
☆270 · Aug 10, 2021 · Updated 4 years ago
Alternatives and similar repositories for sinkhorn-transformer
Users interested in sinkhorn-transformer are comparing it to the libraries listed below.
- Fully featured implementation of Routing Transformer ☆301 · Nov 6, 2021 · Updated 4 years ago
- My take on a practical implementation of Linformer for Pytorch. ☆424 · Jul 27, 2022 · Updated 3 years ago
- Reformer, the efficient Transformer, in Pytorch ☆2,190 · Jun 21, 2023 · Updated 2 years ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Oct 9, 2022 · Updated 3 years ago
- Implementation of Linformer for Pytorch ☆306 · Jan 5, 2024 · Updated 2 years ago
- Transformer training code for sequential tasks ☆610 · Sep 14, 2021 · Updated 4 years ago
- Pytorch library for fast transformer implementations ☆1,769 · Mar 23, 2023 · Updated 3 years ago
- High-performance Pytorch modules ☆18 · Jan 14, 2023 · Updated 3 years ago
- Cascaded Text Generation with Markov Transformers ☆130 · Mar 20, 2023 · Updated 3 years ago
- The entmax mapping and its loss, a family of sparse softmax alternatives ☆469 · Jun 22, 2024 · Updated last year
- Transformer based on a variant of attention that is linear in complexity with respect to sequence length ☆829 · May 5, 2024 · Updated last year
- Axial Positional Embedding for Pytorch ☆84 · Feb 25, 2025 · Updated last year
- Pytorch implementation of Compressive Transformers, from Deepmind ☆165 · Oct 4, 2021 · Updated 4 years ago
- Longformer: The Long-Document Transformer ☆2,194 · Feb 8, 2023 · Updated 3 years ago
- Implementation of Token Shift GPT - an autoregressive model that relies solely on shifting the sequence space for mixing ☆49 · Jan 27, 2022 · Updated 4 years ago
- ☆220 · Jun 8, 2020 · Updated 5 years ago
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆127 · Apr 5, 2021 · Updated 5 years ago
- Generalizing Natural Language Analysis through Span-relation Representations ☆91 · Sep 22, 2025 · Updated 7 months ago
- Implementation of Lie Transformer, Equivariant Self-Attention, in Pytorch