LiyuanLucasLiu / Transformer-Clinic
Understanding the Difficulty of Training Transformers
☆332 · Updated May 31, 2022
Alternatives and similar repositories for Transformer-Clinic
Users interested in Transformer-Clinic are comparing it to the libraries listed below.
- ☆32 · Updated Sep 27, 2021
- Cascaded Text Generation with Markov Transformers ☆130 · Updated Mar 20, 2023
- SentAugment is a data augmentation technique for NLP that retrieves similar sentences from a large bank of sentences. It can be used in c… ☆359 · Updated Feb 22, 2022
- DisCo Transformer for Non-autoregressive MT ☆77 · Updated Jul 28, 2022
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆89 · Updated Feb 1, 2021
- Transformer training code for sequential tasks ☆610 · Updated Sep 14, 2021
- PyTorch library for fast transformer implementations ☆1,761 · Updated Mar 23, 2023
- Tracking the progress in non-autoregressive generation (translation, transcription, etc.) ☆302 · Updated Mar 15, 2023
- [NeurIPS 2020] "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Ya… ☆142 · Updated Dec 30, 2021
- DeLighT: Very Deep and Light-Weight Transformers ☆469 · Updated Oct 16, 2020
- A masked language modeling objective to train a model to predict any subset of the target words, conditioned on both the input text and a… ☆246 · Updated Sep 17, 2021
- Prune a model while fine-tuning or training. ☆406 · Updated Jun 21, 2022
- Fast, general, and tested differentiable structured prediction in PyTorch ☆1,123 · Updated Apr 20, 2022
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated Nov 2, 2020
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆138 · Updated Sep 6, 2023
- MPNet: Masked and Permuted Pre-training for Language Understanding (https://arxiv.org/pdf/2004.09297.pdf) ☆298 · Updated Sep 11, 2021
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆611 · Updated Jul 11, 2024
- Longformer: The Long-Document Transformer ☆2,186 · Updated Feb 8, 2023
- Code for the paper "Vocabulary Learning via Optimal Transport for Neural Machine Translation" ☆442 · Updated Feb 2, 2022
- Pre-trained V+L data preparation ☆46 · Updated Jun 2, 2020
- Original PyTorch implementation of Cross-lingual Language Model Pretraining ☆2,924 · Updated Feb 14, 2023
- Combining encoder-based language models ☆11 · Updated Nov 11, 2021
- Reparameterize your PyTorch modules ☆71 · Updated Dec 31, 2020
- Implementation of COCO-LM (Correcting and Contrasting Text Sequences for Language Model Pretraining) in PyTorch ☆46 · Updated Mar 3, 2021
- PyTorch extensions for high-performance and large-scale training ☆3,397 · Updated Apr 26, 2025
- Implementation of the EMNLP 2020 paper "Data Rejuvenation: Exploiting Inactive Training Examples for Neural Machine Translation" ☆23 · Updated Aug 20, 2021
- FastFormers: highly efficient transformer models for NLU ☆709 · Updated Mar 21, 2025
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆185 · Updated Jun 12, 2023
- ☆221 · Updated Jun 8, 2020
- Interpretable Evaluation for (Almost) All NLP Tasks ☆195 · Updated Sep 22, 2025
- higher is a PyTorch library allowing users to obtain higher-order gradients over losses spanning training loops rather than individual tr… ☆1,627 · Updated Mar 25, 2022
- PyProf2: PyTorch profiling tool ☆82 · Updated Jun 25, 2020
- ☆41 · Updated Feb 12, 2019
- Code for the RecAdam paper "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" ☆120 · Updated Nov 10, 2020
- This repository contains the code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … ☆57 · Updated Jan 1, 2021
- The implementation of DeBERTa ☆2,192 · Updated Sep 29, 2023
- Reformer, the efficient Transformer, in PyTorch ☆2,193 · Updated Jun 21, 2023
- The implementation of "Learning Deep Transformer Models for Machine Translation" ☆116 · Updated Jul 25, 2024
- MASS: Masked Sequence to Sequence Pre-training for Language Generation ☆1,123 · Updated Nov 28, 2022