williamFalcon / minGPT
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
☆27 · Updated 3 years ago
Alternatives and similar repositories for minGPT
Users interested in minGPT are comparing it to the libraries listed below.
- LM Pretraining with PyTorch/TPU ☆137 · Updated 6 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 3 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Updated 4 years ago
- Implementation of the GBST block from the Charformer paper, in Pytorch ☆118 · Updated 4 years ago
- A minimal PyTorch Lightning OpenAI GPT w/ DeepSpeed Training! ☆112 · Updated 2 years ago
- Official Pytorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 5 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. ☆81 · Updated 3 years ago
- Shared code for training sentence embeddings with Flax / JAX ☆28 · Updated 4 years ago
- As good as new. How to successfully recycle English GPT-2 to make models for other languages (ACL Findings 2021) ☆48 · Updated 4 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch ☆76 · Updated 5 years ago
- BERT, RoBERTa fine-tuning over SQuAD Dataset using pytorch-lightning⚡️, 🤗-transformers & 🤗-nlp. ☆36 · Updated 2 years ago
- ☆75 · Updated 4 years ago
- Official Pytorch implementation of "Roles and Utilization of Attention Heads in Transformer-based Neural Language Models", ACL 2020 ☆16 · Updated 10 months ago
- ML Reproducibility Challenge 2020: Electra reimplementation using PyTorch and Transformers ☆12 · Updated 4 years ago
- ☆67 · Updated 3 years ago
- This repository contains the code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … ☆57 · Updated 5 years ago
- A package for fine-tuning Transformers with TPUs, written in Tensorflow2.0+ ☆37 · Updated 4 years ago
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch ☆235 · Updated 2 years ago
- ☆62 · Updated 3 years ago
- Implementation of Mixout with PyTorch ☆75 · Updated 3 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆96 · Updated 3 years ago
- Improving Neural Text Generation with Reinforcement Learning ☆22 · Updated 5 years ago
- PyTorch implementation of NAACL 2021 paper "Multi-view Subword Regularization" ☆26 · Updated 4 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 5 years ago
- This repository contains example code to build models on TPUs ☆30 · Updated 2 years ago
- Viewer for the 🤗 datasets library. ☆86 · Updated 4 years ago
- Implementation of the paper "Plug and Play Autoencoders for Conditional Text Generation" ☆43 · Updated 4 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆134 · Updated last year
- ☆87 · Updated 3 years ago
- ☆13 · Updated 5 years ago