sacmehta / delight
DeLighT: Very Deep and Light-Weight Transformers
☆469 · Oct 16, 2020 · Updated 5 years ago
Alternatives and similar repositories for delight
Users interested in delight are comparing it to the libraries listed below.
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆611 · Jul 11, 2024 · Updated last year
- ☆221 · Jun 8, 2020 · Updated 5 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Jul 26, 2021 · Updated 4 years ago
- Pytorch library for fast transformer implementations ☆1,761 · Mar 23, 2023 · Updated 2 years ago
- ReConsider is a re-ranking model that re-ranks the top-K (passage, answer-span) predictions of an Open-Domain QA Model like DPR (Karpukhi… ☆49 · Apr 26, 2021 · Updated 4 years ago
- Understanding the Difficulty of Training Transformers ☆332 · May 31, 2022 · Updated 3 years ago
- FastFormers - highly efficient transformer models for NLU ☆709 · Mar 21, 2025 · Updated 10 months ago
- Longformer: The Long-Document Transformer ☆2,184 · Feb 8, 2023 · Updated 3 years ago
- SentAugment is a data augmentation technique for NLP that retrieves similar sentences from a large bank of sentences. It can be used in c… ☆359 · Feb 22, 2022 · Updated 3 years ago
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆470 · Jun 22, 2022 · Updated 3 years ago
- Transformer training code for sequential tasks ☆610 · Sep 14, 2021 · Updated 4 years ago
- Official Pytorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Nov 2, 2020 · Updated 5 years ago
- Hopfield Networks is All You Need ☆1,897 · Apr 23, 2023 · Updated 2 years ago
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ☆433 · Aug 17, 2022 · Updated 3 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆253 · Nov 8, 2021 · Updated 4 years ago
- [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240 ☆168 · Oct 7, 2022 · Updated 3 years ago
- [ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing ☆337 · Jul 14, 2024 · Updated last year
- Fully featured implementation of Routing Transformer ☆300 · Nov 6, 2021 · Updated 4 years ago
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,172 · Feb 2, 2022 · Updated 4 years ago
- [NeurIPS 2021] "TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up", Yifan Jiang, Shiyu Chang, Zhangyang Wang ☆1,690 · Nov 3, 2022 · Updated 3 years ago
- [ICML 2020] code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Jun 20, 2021 · Updated 4 years ago
- Tracking the progress in non-autoregressive generation (translation, transcription, etc.) ☆303 · Mar 15, 2023 · Updated 2 years ago
- This repository contains the code for "Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference" ☆1,628 · Jun 12, 2023 · Updated 2 years ago
- Reformer, the efficient Transformer, in Pytorch ☆2,193 · Jun 21, 2023 · Updated 2 years ago
- Official DeiT repository ☆4,323 · Mar 15, 2024 · Updated last year
- A masked language modeling objective to train a model to predict any subset of the target words, conditioned on both the input text and a… ☆246 · Sep 17, 2021 · Updated 4 years ago
- Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models. ☆1,155 · Feb 20, 2024 · Updated last year
- My take on a practical implementation of Linformer for Pytorch. ☆422 · Jul 27, 2022 · Updated 3 years ago
- Fast Block Sparse Matrices for Pytorch ☆550 · Jan 21, 2021 · Updated 5 years ago
- The implementation of "Neural Machine Translation without Embeddings", NAACL 2021 ☆33 · Jun 9, 2021 · Updated 4 years ago
- (ICCV 2021) BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search ☆142 · Dec 6, 2021 · Updated 4 years ago
- This is the official repository for NAACL 2021, "XOR QA: Cross-lingual Open-Retrieval Question Answering". ☆80 · Jun 3, 2021 · Updated 4 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆228 · Apr 18, 2022 · Updated 3 years ago
- [NeurIPS 2020] "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Ya… ☆142 · Dec 30, 2021 · Updated 4 years ago
- Repository for NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients" ☆1,068 · Aug 9, 2024 · Updated last year
- Code to support the paper "Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets" ☆65 · Aug 31, 2021 · Updated 4 years ago
- MPNet: Masked and Permuted Pre-training for Language Understanding https://arxiv.org/pdf/2004.09297.pdf ☆298 · Sep 11, 2021 · Updated 4 years ago
- Cascaded Text Generation with Markov Transformers ☆130 · Mar 20, 2023 · Updated 2 years ago
- [ACL 2021] Learning Dense Representations of Phrases at Scale; EMNLP'2021: Phrase Retrieval Learns Passage Retrieval, Too https://arxiv.o… ☆606 · Jun 15, 2022 · Updated 3 years ago