XuezheMax / fairseq-apollo
FairSeq repo with Apollo optimizer
☆114 · Updated last year
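Apollo is a drop-in optimizer, so this fork is typically used by selecting it at training time (presumably via fairseq's optimizer registry, e.g. `fairseq-train ... --optimizer apollo`). The sketch below is a minimal, hypothetical usage pattern, not the repo's actual API: the `Apollo` import path and constructor arguments are assumptions for illustration, so check the repository for the real interface.

```python
# Minimal sketch: swapping an Apollo-style optimizer in for Adam in a plain
# PyTorch loop. The commented import and constructor are HYPOTHETICAL; the
# real class lives in fairseq-apollo's optimizer registry.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128))
criterion = nn.MSELoss()

# from fairseq.optim.apollo import Apollo        # assumed path, verify in repo
# optimizer = Apollo(model.parameters(), lr=0.01)  # argument names may differ
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # runnable stand-in

for step in range(10):
    x = torch.randn(32, 128)
    loss = criterion(model(x), x)  # toy autoencoding objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```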
Alternatives and similar repositories for fairseq-apollo
Users interested in fairseq-apollo are comparing it to the libraries listed below.
- ☆44 · Updated 5 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration ☆114 · Updated 3 years ago
- ☆98 · Updated 2 years ago
- Efficient Transformers with Dynamic Token Pooling ☆64 · Updated 2 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆138 · Updated 2 years ago
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 5 years ago
- ☆45 · Updated 4 years ago
- DEMix Layers for Modular Language Modeling ☆54 · Updated 4 years ago
- PyTorch reimplementation of REALM and ORQA ☆22 · Updated 3 years ago
- Code accompanying our papers on the "Generative Distributional Control" framework ☆118 · Updated 2 years ago
- Language modeling via stochastic processes. Oral @ ICLR 2022. ☆138 · Updated 2 years ago
- ☆129 · Updated 3 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch ☆76 · Updated 4 years ago
- Cascaded Text Generation with Markov Transformers ☆129 · Updated 2 years ago
- PyTorch implementation of the paper "Efficient Nearest Neighbor Language Models" (EMNLP 2021) ☆74 · Updated 3 years ago
- Improving Neural Text Generation with Reinforcement Learning ☆22 · Updated 4 years ago
- [EMNLP'21] Mirror-BERT: Converting Pretrained Language Models to universal text encoders without labels. ☆78 · Updated 3 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆49 · Updated 5 years ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆69 · Updated last year
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Pro… ☆62 · Updated last month
- Transformers without Tears: Improving the Normalization of Self-Attention ☆133 · Updated last year
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a… ☆46 · Updated 3 years ago
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆89 · Updated 4 years ago
- ☆31 · Updated 2 years ago
- Parameter Efficient Transfer Learning with Diff Pruning ☆74 · Updated 4 years ago
- ☆62 · Updated 3 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆63 · Updated 3 years ago
- DisCo Transformer for Non-autoregressive MT ☆77 · Updated 3 years ago
- Implementation of Mixout with PyTorch ☆75 · Updated 2 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in PyTorch ☆124 · Updated 5 years ago