williamFalcon / minGPT
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
☆27 · Updated 3 years ago
Alternatives and similar repositories for minGPT
Users that are interested in minGPT are comparing it to the libraries listed below
- LM Pretraining with PyTorch/TPU ☆135 · Updated 5 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- A minimal PyTorch Lightning OpenAI GPT w DeepSpeed Training! ☆112 · Updated 2 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Updated 3 years ago
- Implementation of the GBST block from the Charformer paper, in Pytorch ☆117 · Updated 4 years ago
- Official Pytorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆101 · Updated 4 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. ☆82 · Updated 3 years ago
- ML Reproducibility Challenge 2020: Electra reimplementation using PyTorch and Transformers ☆12 · Updated 4 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆155 · Updated last year
- Shared code for training sentence embeddings with Flax / JAX ☆27 · Updated 4 years ago
- Official Pytorch implementation of "Roles and Utilization of Attention Heads in Transformer-based Neural Language Models" (ACL 2020) ☆16 · Updated 4 months ago
- BERT, RoBERTa fine-tuning over SQuAD Dataset using pytorch-lightning ⚡️, 🤗-transformers & 🤗-nlp. ☆36 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch ☆227 · Updated 2 years ago
- As good as new. How to successfully recycle English GPT-2 to make models for other languages (ACL Findings 2021) ☆48 · Updated 3 years ago
- ☆67 · Updated 2 years ago
- This repository contains the code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … ☆55 · Updated 4 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch ☆76 · Updated 4 years ago
- ☆62 · Updated 3 years ago
- Language Modeling Example with Transformers and PyTorch Lightning ☆65 · Updated 4 years ago
- Distillation of BERT model with catalyst framework ☆78 · Updated 2 years ago
- ☆75 · Updated 4 years ago
- Pytorch Implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks ☆63 · Updated 3 years ago
- ☆47 · Updated 5 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆136 · Updated last year
- This repository contains example code to build models on TPUs ☆30 · Updated 2 years ago
- ☆100 · Updated 2 years ago
- A package for fine-tuning Transformers with TPUs, written in Tensorflow2.0+ ☆38 · Updated 4 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆178 · Updated last year
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆82 · Updated 10 months ago