lucidrains / mlm-pytorch
An implementation of masked language modeling for Pytorch, made as concise and simple as possible
☆179 · Updated 2 years ago
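For context on what the repository implements, here is a minimal sketch of the core BERT-style masking step that masked language modeling relies on. This is illustrative only, not the repository's own code; `mask_token_id`, `vocab_size`, and `mask_prob` are assumed parameters.

```python
import torch

def mask_tokens(tokens, mask_token_id, vocab_size, mask_prob=0.15):
    """BERT-style MLM masking: select ~15% of positions to predict;
    of those, 80% become [MASK], 10% a random token, 10% unchanged.
    Labels are -100 (ignored by cross-entropy) at unselected positions."""
    labels = tokens.clone()
    selected = torch.rand(tokens.shape) < mask_prob
    labels[~selected] = -100  # loss is computed only on selected positions

    tokens = tokens.clone()
    # 80% of selected positions -> [MASK]
    replace = selected & (torch.rand(tokens.shape) < 0.8)
    tokens[replace] = mask_token_id
    # half of the remaining 20% (i.e. 10% of selected) -> random token
    randomize = selected & ~replace & (torch.rand(tokens.shape) < 0.5)
    tokens[randomize] = torch.randint(vocab_size, tokens.shape)[randomize]
    # the last 10% keep their original token
    return tokens, labels

# usage sketch:
# batch = torch.randint(5, 1000, (2, 128))
# inputs, labels = mask_tokens(batch, mask_token_id=4, vocab_size=1000)
```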
Alternatives and similar repositories for mlm-pytorch
Users interested in mlm-pytorch are comparing it with the libraries listed below.
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch ☆231 · Updated 2 years ago
- Sequence modeling with Mega. ☆300 · Updated 2 years ago
- Pytorch implementation of Compressive Transformers, from Deepmind ☆164 · Updated 4 years ago
- Implementation of the GBST block from the Charformer paper, in Pytorch ☆118 · Updated 4 years ago
- Language Modeling Example with Transformers and PyTorch Lightning ☆65 · Updated 4 years ago
- Understanding the Difficulty of Training Transformers ☆330 · Updated 3 years ago
- Trains Transformer model variants. Data isn't shuffled between batches. ☆143 · Updated 3 years ago
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ☆330 · Updated last year
- Code for the ALiBi method for transformer language models (ICLR 2022); see the bias sketch after this list ☆544 · Updated last year
- Fully featured implementation of Routing Transformer ☆296 · Updated 3 years ago
- FairSeq repo with Apollo optimizer ☆114 · Updated last year
- Implementation of Feedback Transformer in Pytorch ☆108 · Updated 4 years ago
- PyTorch; masked language model; BERT ☆72 · Updated 5 years ago
- ☆363 · Updated last year
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆207 · Updated 2 years ago
- Check out the new version at the link! ☆22 · Updated 4 years ago
- Repository containing code for "How to Train BERT with an Academic Budget" paper ☆314 · Updated 2 years ago
- A minimal PyTorch Lightning OpenAI GPT w DeepSpeed Training! ☆113 · Updated 2 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆252 · Updated 3 years ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆165 · Updated last year
- ☆247 · Updated 5 years ago
- Standalone Product Key Memory module in Pytorch - for augmenting Transformer models ☆83 · Updated last year
- ☆219 · Updated 5 years ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆230 · Updated last year
- Transformers for Longer Sequences ☆617 · Updated 3 years ago
- Official codebase for Pretrained Transformers as Universal Computation Engines. ☆247 · Updated 3 years ago
- Code for "Finetuning Pretrained Transformers into Variational Autoencoders" ☆39 · Updated 3 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆101 · Updated 2 years ago
- An implementation of local windowed attention for language modeling ☆479 · Updated 2 months ago
- Optimus: the first large-scale pre-trained VAE language model ☆391 · Updated 2 years ago
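The ALiBi entry above is simple enough to sketch: instead of positional embeddings, ALiBi ("Train Short, Test Long", ICLR 2022) adds a static, head-specific linear bias to the attention logits. The sketch below is illustrative, not that repository's code; the head-slope formula 2^(-8h/n) follows the paper's default for a power-of-two number of heads.

```python
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    """Additive per-head attention bias: bias[h, i, j] = -slope_h * (i - j),
    with slopes forming the geometric sequence 2^(-8/n), 2^(-16/n), ..., 2^(-8).
    Positions j > i are assumed to be handled separately by the causal mask."""
    slopes = 2.0 ** (-8.0 * torch.arange(1, num_heads + 1) / num_heads)
    pos = torch.arange(seq_len)
    distance = pos[:, None] - pos[None, :]      # (i - j); negative above diagonal
    return -slopes[:, None, None] * distance    # shape: (heads, seq, seq)

# usage sketch: added to attention logits before softmax, e.g.
# logits = q @ k.transpose(-2, -1) / d_head**0.5 + alibi_bias(h, n) + causal_mask
```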