My take on a practical implementation of Linformer for PyTorch.
☆421 · Updated Jul 27, 2022
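For context on what this repository and several of those below implement: Linformer replaces full O(n²) self-attention with attention over keys and values that are first projected down to a fixed length k along the sequence axis, giving O(n·k) time and memory. A minimal sketch of that idea in PyTorch follows; the class name, parameter names, and hyperparameters are illustrative assumptions, not this repository's actual API.

```python
import torch
import torch.nn as nn

class LinformerSelfAttention(nn.Module):
    """Hypothetical sketch of Linformer-style attention (not the repo's API)."""

    def __init__(self, dim, seq_len, k=64, heads=8):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        # Learned (seq_len -> k) projections applied along the sequence axis;
        # these are what reduce the attention cost from O(n^2) to O(n*k).
        self.E = nn.Parameter(torch.randn(seq_len, k) / k)
        self.F = nn.Parameter(torch.randn(seq_len, k) / k)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x):
        b, n, d = x.shape
        h = self.heads
        q, key, val = self.to_q(x), self.to_k(x), self.to_v(x)
        # Compress keys/values from sequence length n down to length k.
        key = torch.einsum('bnd,nk->bkd', key, self.E)
        val = torch.einsum('bnd,nk->bkd', val, self.F)
        # Split heads: (b, h, n or k, d/h).
        q = q.reshape(b, n, h, -1).transpose(1, 2)
        key = key.reshape(b, -1, h, d // h).transpose(1, 2)
        val = val.reshape(b, -1, h, d // h).transpose(1, 2)
        # Attention matrix is (b, h, n, k) instead of (b, h, n, n).
        attn = (q @ key.transpose(-2, -1) * self.scale).softmax(dim=-1)
        out = (attn @ val).transpose(1, 2).reshape(b, n, d)
        return self.to_out(out)
```

Note that the fixed projection matrices tie the module to a maximum sequence length, which is one of the practical trade-offs the implementations listed below handle in different ways.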
Alternatives and similar repositories for linformer-pytorch
Users interested in linformer-pytorch are comparing it to the libraries listed below.
- Pytorch library for fast transformer implementations ☆1,763 · Updated Mar 23, 2023
- Reproducing the Linear Multihead Attention introduced in Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆75 · Updated Jun 23, 2020
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆610 · Updated Jul 11, 2024
- ☆221 · Updated Jun 8, 2020
- DeLighT: Very Deep and Light-Weight Transformers ☆469 · Updated Oct 16, 2020
- Longformer: The Long-Document Transformer ☆2,189 · Updated Feb 8, 2023
- Repository for NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients" ☆1,068 · Updated Aug 9, 2024
- List of efficient attention modules ☆1,023 · Updated Aug 23, 2021
- Understanding the Difficulty of Training Transformers ☆332 · Updated May 31, 2022
- SentAugment is a data augmentation technique for NLP that retrieves similar sentences from a large bank of sentences. It can be used in c… ☆359 · Updated Feb 22, 2022
- FastFormers - highly efficient transformer models for NLU ☆709 · Updated Mar 21, 2025
- Hopfield Networks is All You Need ☆1,899 · Updated Apr 23, 2023
- Long Range Arena for Benchmarking Efficient Transformers ☆782 · Updated Dec 16, 2023
- An implementation of the efficient attention module. ☆329 · Updated Nov 30, 2020
- ☆388 · Updated Oct 18, 2023
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" ☆1,611 · Updated Aug 12, 2020
- Implementation of RealFormer using pytorch ☆101 · Updated Dec 27, 2020
- Fast Block Sparse Matrices for Pytorch ☆549 · Updated Jan 21, 2021
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Updated Jul 26, 2021
- PyTorch Codes for Haar Graph Pooling ☆11 · Updated Feb 16, 2023
- ☆3,687 · Updated Sep 21, 2022
- DisCo Transformer for Non-autoregressive MT ☆77 · Updated Jul 28, 2022
- PyTorch extensions for high performance and large scale training. ☆3,400 · Updated Apr 26, 2025
- Transformers without Tears: Improving the Normalization of Self-Attention ☆134 · Updated May 29, 2024
- torch-optimizer -- collection of optimizers for Pytorch ☆3,163 · Updated Mar 22, 2024
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆470 · Updated Jun 22, 2022
- The implementation of the papers on dual learning of natural language understanding and generation. (ACL2019,2020; Findings of EMNLP 2020… ☆67 · Updated Oct 13, 2020
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆63 · Updated Apr 19, 2022
- Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch ☆58 · Updated Jan 13, 2021
- [ICML 2020] code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated Jun 20, 2021
- Structured state space sequence models ☆2,854 · Updated Jul 17, 2024
- Transformer training code for sequential tasks ☆609 · Updated Sep 14, 2021
- Code for EMNLP 2020 paper CoDIR ☆41 · Updated Oct 4, 2022
- Official DeiT repository ☆4,326 · Updated Mar 15, 2024
- Paper and code for Gradient Descent: The Ultimate Optimizer ☆24 · Updated Oct 3, 2023
- The entmax mapping and its loss, a family of sparse softmax alternatives. ☆461 · Updated Jun 22, 2024
- Simple XLNet implementation with Pytorch Wrapper ☆580 · Updated Jul 3, 2019
- Tracking the progress in non-autoregressive generation (translation, transcription, etc.) ☆302 · Updated Mar 15, 2023
- This repository contains the code for "BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Representations". ☆64 · Updated Aug 13, 2020