lucidrains / mlm-pytorch
An implementation of masked language modeling for Pytorch, made as concise and simple as possible
☆178 · Updated last year
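For orientation before the list: BERT-style masked language modeling, which this repo implements, corrupts a fraction of input tokens and trains the model to predict the originals at the corrupted positions. The sketch below is plain PyTorch, not the repo's actual API; the token ids, vocabulary size, and the 15% / 80-10-10 split are illustrative assumptions following the BERT paper's conventions.

```python
import torch

def mlm_corrupt(tokens, mask_token_id=103, vocab_size=30522,
                mask_prob=0.15, pad_token_id=0):
    # BERT-style corruption: select ~15% of non-pad positions; of those,
    # replace 80% with [MASK], 10% with a random token, keep 10% unchanged
    labels = tokens.clone()
    selected = (torch.rand(tokens.shape) < mask_prob) & (tokens != pad_token_id)
    labels[~selected] = -100                         # loss only on selected positions

    corrupted = tokens.clone()
    r = torch.rand(tokens.shape)
    corrupted[selected & (r < 0.8)] = mask_token_id  # 80% -> [MASK]
    swap = selected & (r >= 0.8) & (r < 0.9)         # 10% -> random token
    corrupted[swap] = torch.randint(vocab_size, tokens.shape)[swap]
    return corrupted, labels                         # remaining 10% stay unchanged

# usage with any model mapping (batch, seq) ids -> (batch, seq, vocab) logits:
tokens = torch.randint(5, 30522, (8, 128))
inputs, labels = mlm_corrupt(tokens)
# logits = model(inputs)
# loss = torch.nn.functional.cross_entropy(logits.transpose(1, 2), labels, ignore_index=-100)
```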
Alternatives and similar repositories for mlm-pytorch:
Users interested in mlm-pytorch are comparing it to the repositories listed below.
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch ☆221 · Updated last year
- Sequence modeling with Mega. ☆297 · Updated 2 years ago
- Code for the ALiBi method for transformer language models (ICLR 2022); a sketch of its linear bias follows this list ☆512 · Updated last year
- Implementation of the GBST block from the Charformer paper, in Pytorch ☆117 · Updated 3 years ago
- Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint ☆371 · Updated 10 months ago
- Pytorch implementation of Compressive Transformers, from DeepMind ☆155 · Updated 3 years ago
- Understanding the Difficulty of Training Transformers ☆328 · Updated 2 years ago
- PyTorch masked language model (BERT) ☆72 · Updated 5 years ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ☆111 · Updated last year
- Repository containing code for the paper "How to Train BERT with an Academic Budget" ☆310 · Updated last year
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ☆325 · Updated last year
- Optimus: the first large-scale pre-trained VAE language model ☆379 · Updated last year
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆225 · Updated 4 months ago
- Language Modeling Example with Transformers and PyTorch Lightning ☆65 · Updated 4 years ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ☆612 · Updated 2 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆97 · Updated last year
- Transformer with Untied Positional Encoding (TUPE). Code of the paper "Rethinking Positional Encoding in Language Pre-training". Improve existing… ☆250 · Updated 3 years ago
- FairSeq repo with Apollo optimizer ☆109 · Updated last year
- Implementation of Feedback Transformer in Pytorch ☆105 · Updated 3 years ago
- An implementation of local windowed attention for language modeling ☆408 · Updated last week
- Fully featured implementation of Routing Transformer ☆288 · Updated 3 years ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆204 · Updated last year
- Trains Transformer model variants. Data isn't shuffled between batches. ☆139 · Updated 2 years ago
- ICML 2022: NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework ☆258 · Updated last year
- ☆336 · Updated 9 months ago
- Implementation of Fast Transformer in Pytorch ☆172 · Updated 3 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆256 · Updated 3 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021) ☆226 · Updated 2 years ago
- Efficient Transformers with Dynamic Token Pooling ☆55 · Updated last year
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx…) ☆136 · Updated last year
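As promised at the ALiBi entry above: instead of positional embeddings, ALiBi adds a head-specific linear penalty on query-key distance to the attention scores. The sketch below is a plain-PyTorch illustration under stated assumptions (power-of-two head count; the paper interpolates slopes otherwise), not the linked repo's official code.

```python
import torch

def alibi_bias(num_heads, seq_len):
    # head-specific slopes form a geometric sequence 2^-1, 2^-2, ..., 2^-8
    # spread across heads (exact when num_heads is a power of two)
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads)
                           for h in range(num_heads)])
    # causal distance between query position i and key position j (j <= i)
    i = torch.arange(seq_len).view(-1, 1)
    j = torch.arange(seq_len).view(1, -1)
    distance = (i - j).clamp(min=0).float()      # (seq, seq)
    return -slopes.view(-1, 1, 1) * distance     # (heads, seq, seq)

# added to the raw attention logits before the softmax, e.g.:
# scores = q @ k.transpose(-2, -1) * d ** -0.5 + alibi_bias(heads, n)
```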