My take on a practical implementation of Linformer for Pytorch.
☆424 · Jul 27, 2022 · Updated 3 years ago
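For context on what these repositories implement: Linformer replaces full self-attention with a low-rank approximation by projecting the keys and values along the sequence axis, reducing the cost from quadratic to linear in sequence length. Below is a minimal single-head NumPy sketch of that idea; the function and parameter names (`linformer_attention`, `proj_k`, `proj_v`, `k_dim`) are illustrative assumptions, not the API of any repository listed here.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(q, k, v, proj_k, proj_v):
    """Single-head Linformer-style attention (illustrative sketch).

    q, k, v:        (n, d) query / key / value matrices
    proj_k, proj_v: (k_dim, n) projections that compress the sequence
                    axis from n down to k_dim.
    Cost is O(n * k_dim) rather than the O(n^2) of full attention.
    """
    d = q.shape[-1]
    k_low = proj_k @ k                   # (k_dim, d): compressed keys
    v_low = proj_v @ v                   # (k_dim, d): compressed values
    scores = q @ k_low.T / np.sqrt(d)    # (n, k_dim) attention logits
    return softmax(scores) @ v_low       # (n, d) attention output

# Toy usage with random data
rng = np.random.default_rng(0)
n, d, k_dim = 512, 64, 32  # sequence length, head dim, projection dim
q = rng.standard_normal((n, d))
k = rng.standard_normal((n, d))
v = rng.standard_normal((n, d))
e = rng.standard_normal((k_dim, n)) / np.sqrt(n)
f = rng.standard_normal((k_dim, n)) / np.sqrt(n)
out = linformer_attention(q, k, v, e, f)
print(out.shape)  # (512, 64)
```

In the paper the projections are learned parameters (optionally shared across heads and layers); here they are fixed random matrices just to show the shapes and the linear cost.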
Alternatives and similar repositories for linformer-pytorch
Users that are interested in linformer-pytorch are comparing it to the libraries listed below.
- Implementation of Linformer for Pytorch · ☆306 · Jan 5, 2024 · Updated 2 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention · ☆270 · Aug 10, 2021 · Updated 4 years ago
- Transformer based on a variant of attention that is linear in complexity with respect to sequence length · ☆826 · May 5, 2024 · Updated last year
- Reformer, the efficient Transformer, in Pytorch · ☆2,189 · Jun 21, 2023 · Updated 2 years ago
- Pytorch library for fast transformer implementations · ☆1,767 · Mar 23, 2023 · Updated 3 years ago
- An implementation of Performer, a linear attention-based transformer, in Pytorch · ☆1,177 · Feb 2, 2022 · Updated 4 years ago
- Fully featured implementation of Routing Transformer · ☆300 · Nov 6, 2021 · Updated 4 years ago
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) · ☆75 · Jun 23, 2020 · Updated 5 years ago
- ☆220 · Jun 8, 2020 · Updated 5 years ago
- Longformer: The Long-Document Transformer · ☆2,192 · Feb 8, 2023 · Updated 3 years ago
- List of efficient attention modules · ☆1,021 · Aug 23, 2021 · Updated 4 years ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention · ☆611 · Jul 11, 2024 · Updated last year
- DeLighT: Very Deep and Light-Weight Transformers · ☆469 · Oct 16, 2020 · Updated 5 years ago
- Implements Reformer: The Efficient Transformer in pytorch. · ☆86 · Mar 1, 2020 · Updated 6 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch · ☆126 · Nov 13, 2020 · Updated 5 years ago
- Repository for NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients" · ☆1,072 · Aug 9, 2024 · Updated last year
- Long Range Arena for Benchmarking Efficient Transformers · ☆788 · Dec 16, 2023 · Updated 2 years ago
- ☆390 · Oct 18, 2023 · Updated 2 years ago
- FastFormers - highly efficient transformer models for NLU · ☆709 · Mar 21, 2025 · Updated last year
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch · ☆76 · Dec 4, 2022 · Updated 3 years ago
- ☆3,699 · Sep 21, 2022 · Updated 3 years ago
- Understanding the Difficulty of Training Transformers · ☆332 · May 31, 2022 · Updated 3 years ago
- An implementation of the efficient attention module. · ☆329 · Nov 30, 2020 · Updated 5 years ago
- Hopfield Networks is All You Need · ☆1,917 · Apr 23, 2023 · Updated 2 years ago
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" · ☆1,611 · Aug 12, 2020 · Updated 5 years ago
- Implementation of LogAvgExp for Pytorch · ☆37 · Apr 10, 2025 · Updated last year
- Implementation of RealFormer using pytorch · ☆101 · Dec 27, 2020 · Updated 5 years ago
- SentAugment is a data augmentation technique for NLP that retrieves similar sentences from a large bank of sentences. It can be used in c… · ☆359 · Feb 22, 2022 · Updated 4 years ago
- Implementation / replication of DALL-E, OpenAI's Text-to-Image Transformer, in Pytorch · ☆59 · Jan 13, 2021 · Updated 5 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention · ☆135 · May 29, 2024 · Updated last year
- Transformer training code for sequential tasks · ☆609 · Sep 14, 2021 · Updated 4 years ago
- Adaptive Sparse ViT · ☆16 · Aug 1, 2023 · Updated 2 years ago
- The implementation of the papers on dual learning of natural language understanding and generation. (ACL 2019, 2020; Findings of EMNLP 2020… · ☆67 · Oct 13, 2020 · Updated 5 years ago
- Fast Block Sparse Matrices for Pytorch · ☆550 · Jan 21, 2021 · Updated 5 years ago
- torch-optimizer -- collection of optimizers for Pytorch · ☆3,170 · Mar 22, 2024 · Updated 2 years ago
- PyTorch extensions for high performance and large scale training. · ☆3,405 · Apr 26, 2025 · Updated 11 months ago
- An implementation of local windowed attention for language modeling · ☆498 · Jul 16, 2025 · Updated 9 months ago
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch · ☆1,200 · Aug 22, 2023 · Updated 2 years ago
- Structured state space sequence models · ☆2,883 · Jul 17, 2024 · Updated last year