lucidrains / mlm-pytorch
An implementation of masked language modeling for PyTorch, made as concise and simple as possible
☆181 · Updated last year
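Since the repository's README is not reproduced on this page, the snippet below is only a minimal sketch of the BERT-style corruption scheme that MLM trainers like this one typically apply before computing a masked cross-entropy loss. The function name `mask_tokens`, the 15%/80%/10%/10% split, and the `-100` ignore index are illustrative assumptions here, not this package's actual API.

```python
import torch
import torch.nn.functional as F

def mask_tokens(input_ids, mask_token_id, vocab_size, pad_token_id=0, mask_prob=0.15):
    """BERT-style corruption: pick ~15% of non-pad positions as prediction targets,
    replace 80% of them with the mask token, 10% with a random token, keep 10% as-is."""
    labels = input_ids.clone()

    # choose prediction targets, never selecting padding
    probs = torch.full(input_ids.shape, mask_prob)
    probs.masked_fill_(input_ids.eq(pad_token_id), 0.0)
    targets = torch.bernoulli(probs).bool()
    labels[~targets] = -100  # positions ignored by the loss

    corrupted = input_ids.clone()

    # 80% of targets -> mask token
    replaced = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & targets
    corrupted[replaced] = mask_token_id

    # half of the remaining 20% -> random token (the rest stay unchanged)
    randomized = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & targets & ~replaced
    corrupted[randomized] = torch.randint(vocab_size, input_ids.shape)[randomized]

    return corrupted, labels

# usage sketch: a `model` returning [batch, seq, vocab] logits is assumed, not provided here
ids = torch.randint(5, 1000, (2, 128))
corrupted, labels = mask_tokens(ids, mask_token_id=4, vocab_size=1000)
# logits = model(corrupted)
# loss = F.cross_entropy(logits.view(-1, 1000), labels.view(-1), ignore_index=-100)
```

Only the corrupted positions contribute to the loss, which is what makes the objective a masked language model rather than a plain autoencoder.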
Alternatives and similar repositories for mlm-pytorch
Users interested in mlm-pytorch are comparing it to the libraries listed below.
- A simple and working implementation of ELECTRA, the fastest way to pretrain language models from scratch, in PyTorch · ☆227 · Updated last year
- Implementation of the GBST block from the Charformer paper, in PyTorch · ☆117 · Updated 3 years ago
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) · ☆329 · Updated last year
- Language Modeling Example with Transformers and PyTorch Lightning · ☆65 · Updated 4 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… · ☆251 · Updated 3 years ago
- PyTorch; masked language model; BERT · ☆72 · Updated 5 years ago
- Understanding the Difficulty of Training Transformers · ☆329 · Updated 3 years ago
- PyTorch implementation of Compressive Transformers, from DeepMind · ☆158 · Updated 3 years ago
- Sequence modeling with Mega. · ☆295 · Updated 2 years ago
- Code for the ALiBi method for transformer language models (ICLR 2022) · ☆530 · Updated last year
- Implementation of Feedback Transformer in PyTorch · ☆107 · Updated 4 years ago
- FairSeq repo with Apollo optimizer · ☆114 · Updated last year
- Memory Efficient Attention (O(sqrt(n))) for JAX and PyTorch · ☆184 · Updated 2 years ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena · ☆204 · Updated last year
- Optimus: the first large-scale pre-trained VAE language model · ☆385 · Updated last year
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! · ☆111 · Updated 2 years ago
- Implementation of Mixout with PyTorch · ☆75 · Updated 2 years ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch · ☆230 · Updated 9 months ago
- ☆218 · Updated 4 years ago
- A library for making Transformer Variational Autoencoders. (Extends the Huggingface/transformers library.) · ☆140 · Updated 3 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch · ☆100 · Updated 2 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 · ☆114 · Updated 2 years ago
- Check out the new version at the link! · ☆22 · Updated 4 years ago
- Repository containing code for the "How to Train BERT with an Academic Budget" paper · ☆313 · Updated last year
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" · ☆105 · Updated 3 years ago
- Transformer-based Conditional Variational Autoencoder for Controllable Story Generation · ☆155 · Updated 2 years ago
- Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions · ☆258 · Updated last year
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021) · ☆225 · Updated 3 years ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning · ☆609 · Updated 2 years ago
- ☆319 · Updated 3 years ago