XuezheMax / fairseq-apollo
FairSeq repo with Apollo optimizer
☆114 · Updated 2 years ago
Alternatives and similar repositories for fairseq-apollo
Users interested in fairseq-apollo are comparing it to the libraries listed below.
- ☆99 · Updated 2 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration ☆116 · Updated 3 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx…) ☆138 · Updated 2 years ago
- PyTorch reimplementation of REALM and ORQA ☆22 · Updated 4 years ago
- ☆44 · Updated 5 years ago
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 5 years ago
- DEMix Layers for Modular Language Modeling ☆54 · Updated 4 years ago
- Code accompanying our papers on the "Generative Distributional Control" framework ☆118 · Updated 3 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch ☆76 · Updated 5 years ago
- ☆45 · Updated 4 years ago
- PyTorch implementation of the paper "Efficient Nearest Neighbor Language Models" (EMNLP 2021) ☆75 · Updated 4 years ago
- EMNLP 2021 - CTC: A Unified Framework for Evaluating Natural Language Generation ☆97 · Updated 2 years ago
- Language modeling via stochastic processes. Oral @ ICLR 2022. ☆139 · Updated 2 years ago
- DisCo Transformer for Non-autoregressive MT ☆77 · Updated 3 years ago
- ☆62 · Updated 3 years ago
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a…) ☆46 · Updated 3 years ago
- Implementation of Mixout with PyTorch ☆75 · Updated 3 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆134 · Updated last year
- Code for the paper "A Theoretical Analysis of the Repetition Problem in Text Generation" in AAAI 2021. ☆57 · Updated 3 years ago
- Efficient Transformers with Dynamic Token Pooling ☆67 · Updated 2 years ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆69 · Updated 2 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆118 · Updated 2 years ago
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆29 · Updated 4 years ago
- Method to improve inference time for BERT; an implementation of the paper "PoWER-BERT: Accelerating BERT Inference via Pro…" ☆62 · Updated 4 months ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆174 · Updated 5 years ago
- ☆98 · Updated 3 years ago
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ☆34 · Updated 4 years ago
- [EMNLP'21] Mirror-BERT: Converting Pretrained Language Models to universal text encoders without labels. ☆77 · Updated 3 years ago
- ☆221 · Updated 5 years ago
- Official code for the ICLR 2020 paper 'ARE PRE-TRAINED LANGUAGE MODELS AWARE OF PHRASES? SIMPLE BUT STRONG BASELINES FOR GRAMMAR INDUCTION' ☆30 · Updated 2 years ago