XuezheMax / fairseq-apollo
FairSeq repo with Apollo optimizer
☆114 · Updated 2 years ago
Alternatives and similar repositories for fairseq-apollo
Users interested in fairseq-apollo often compare it to the repositories listed below.
- ☆44 · Updated 5 years ago
- ☆98 · Updated 2 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆138 · Updated 2 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆114 · Updated 3 years ago
- Efficient Transformers with Dynamic Token Pooling ☆66 · Updated 2 years ago
- PyTorch reimplementation of REALM and ORQA ☆22 · Updated 3 years ago
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 5 years ago
- DEMix Layers for Modular Language Modeling ☆54 · Updated 4 years ago
- ☆45 · Updated 4 years ago
- DisCo Transformer for Non-autoregressive MT ☆77 · Updated 3 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch ☆76 · Updated 5 years ago
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆89 · Updated 4 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆49 · Updated 5 years ago
- EMNLP 2021 - CTC: A Unified Framework for Evaluating Natural Language Generation ☆98 · Updated 2 years ago
- Language modeling via stochastic processes. Oral @ ICLR 2022. ☆138 · Updated 2 years ago
- Code for the paper "A Theoretical Analysis of the Repetition Problem in Text Generation" in AAAI 2021 ☆57 · Updated 3 years ago
- PyTorch implementation of the paper "Efficient Nearest Neighbor Language Models" (EMNLP 2021) ☆74 · Updated 3 years ago
- Implementation of the GBST block from the Charformer paper, in PyTorch ☆118 · Updated 4 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆118 · Updated 2 years ago
- ☆62 · Updated 3 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in PyTorch ☆125 · Updated 5 years ago
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆29 · Updated 3 years ago
- ☆99 · Updated 3 years ago
- Code accompanying our papers on the "Generative Distributional Control" framework ☆118 · Updated 3 years ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences" ☆69 · Updated 2 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆63 · Updated 3 years ago
- ☆52 · Updated 2 years ago
- [EMNLP'21] Mirror-BERT: Converting Pretrained Language Models to universal text encoders without labels ☆78 · Updated 3 years ago
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a… ☆46 · Updated 3 years ago
- DiffusER: Discrete Diffusion via Edit-based Reconstruction (Reid, Hellendoorn & Neubig, 2022) ☆55 · Updated 5 months ago