XuezheMax / fairseq-apollo
FairSeq repo with Apollo optimizer
⭐110 · Updated last year
Alternatives and similar repositories for fairseq-apollo:
Users interested in fairseq-apollo are comparing it to the libraries listed below.
- PyTorch reimplementation of REALM and ORQA · ⭐22 · Updated 3 years ago
- ⭐44 · Updated 4 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration · ⭐114 · Updated 2 years ago
- ⭐95 · Updated last year
- DEMix Layers for Modular Language Modeling · ⭐53 · Updated 3 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://ar…) · ⭐136 · Updated last year
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) · ⭐101 · Updated 4 years ago
- DisCo Transformer for non-autoregressive MT · ⭐78 · Updated 2 years ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences" · ⭐69 · Updated last year
- Code for the paper "A Theoretical Analysis of the Repetition Problem in Text Generation" (AAAI 2021) · ⭐52 · Updated 2 years ago
- PyTorch implementation of the paper "Efficient Nearest Neighbor Language Models" (EMNLP 2021) · ⭐72 · Updated 3 years ago
- DiffusER: Discrete Diffusion via Edit-based Reconstruction (Reid, Hellendoorn & Neubig, 2022) · ⭐54 · Updated 2 years ago
- ⭐63 · Updated 2 years ago
- Efficient Transformers with Dynamic Token Pooling · ⭐60 · Updated last year
- ⭐99 · Updated 2 years ago
- ⭐127 · Updated 2 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch · ⭐75 · Updated 4 years ago
- Method to improve inference time for BERT; an implementation of the paper "PoWER-BERT: Accelerating BERT Inference via Pro…" · ⭐61 · Updated last year
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer · ⭐55 · Updated 2 years ago
- ⭐45 · Updated 3 years ago
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a…) · ⭐46 · Updated 2 years ago
- This repository contains the code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … · ⭐55 · Updated 4 years ago
- Code accompanying our papers on the "Generative Distributional Control" framework · ⭐118 · Updated 2 years ago
- Language modeling via stochastic processes (oral @ ICLR 2022) · ⭐137 · Updated last year
- ⭐30 · Updated last year
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" · ⭐88 · Updated 4 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining · ⭐118 · Updated last year
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) · ⭐30 · Updated 3 years ago
- ⭐47 · Updated 4 years ago
- ⭐67 · Updated 2 years ago