SeanNaren / minGPT
A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training!
☆111 · Updated last year
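Since the repo's pitch is a minimal GPT trained through PyTorch Lightning with DeepSpeed, here is a rough sketch of how such a setup is typically wired together. This is not the repo's actual code: `TinyGPT` and its hyperparameters are hypothetical stand-ins; only Lightning's registered `deepspeed_stage_2` strategy alias is assumed.

```python
# Minimal sketch (NOT SeanNaren/minGPT's actual code) of a GPT-style
# LightningModule trained with Lightning's DeepSpeed strategy.
import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl

class TinyGPT(pl.LightningModule):  # hypothetical stand-in for the minGPT model
    def __init__(self, vocab_size=50257, d_model=256, n_layer=4, n_head=4, block_size=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Parameter(torch.zeros(1, block_size, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_head, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layer)
        self.head = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, idx):
        x = self.tok_emb(idx) + self.pos_emb[:, : idx.size(1)]
        # causal mask: each position may only attend to earlier positions
        mask = nn.Transformer.generate_square_subsequent_mask(idx.size(1)).to(idx.device)
        return self.head(self.blocks(x, mask=mask))

    def training_step(self, batch, batch_idx):
        idx, targets = batch  # (B, T) token ids and shifted next-token targets
        logits = self(idx)
        loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=3e-4)

# ZeRO stage 2 sharding via the "deepspeed_stage_2" strategy alias; the
# precision flag spelling varies across Lightning versions ("16-mixed" on 2.x).
trainer = pl.Trainer(strategy="deepspeed_stage_2", accelerator="gpu",
                     devices=1, precision="16-mixed", max_steps=100)
# trainer.fit(TinyGPT(), train_dataloaders=...)
```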
Alternatives and similar repositories for minGPT:
Users interested in minGPT are comparing it to the libraries listed below.
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP); a minimal µP usage sketch follows this list ☆78 · Updated 2 years ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 2 years ago
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆101 · Updated 4 years ago
- ☆93 · Updated last year
- ☆65 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch. ☆14 · Updated this week
- fairseq repo with the Apollo optimizer ☆109 · Updated last year
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆113 · Updated 2 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆136 · Updated last year
- ☆96 · Updated 2 years ago
- See details in https://github.com/pytorch/xla/blob/r1.12/torch_xla/distributed/fsdp/README.md ☆23 · Updated 2 years ago
- ☆67 · Updated 2 years ago
- ☆72 · Updated 9 months ago
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆26 · Updated 2 years ago
- ☆96 · Updated last year
- Implementation of a Transformer, but completely in Triton ☆257 · Updated 2 years ago
- Transformers at any scale ☆41 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆82 · Updated 2 weeks ago
- ☆88 · Updated 8 months ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆225 · Updated 5 months ago
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- Recurrent Memory Transformer ☆149 · Updated last year
- This is the implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.1… ☆127 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 10 months ago
- A framework for few-shot evaluation of autoregressive language models. ☆102 · Updated last year
- The original implementation of Min et al. "Nonparametric Masked Language Modeling" (paper: https://arxiv.org/abs/2212.01349) ☆157 · Updated 2 years ago
- ☆73 · Updated last year
- Scalable training for dense retrieval models. ☆275 · Updated last year
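One entry above covers Hugging Face transformers in maximal update parametrization (µP). As promised there, here is a minimal sketch of the µP workflow, following the pattern documented for Microsoft's `mup` package rather than that repo's own code; `Net` and the widths below are made up for illustration.

```python
# Minimal µP sketch following the pattern in Microsoft's `mup` package
# (https://github.com/microsoft/mup); `Net` and the widths are hypothetical.
import torch
import torch.nn as nn
from mup import MuReadout, set_base_shapes, MuAdam

class Net(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.body = nn.Linear(16, width)
        # the output layer must be mup-aware so its initialization and
        # learning-rate scaling are handled correctly under µP
        self.readout = MuReadout(width, 10)

    def forward(self, x):
        return self.readout(torch.relu(self.body(x)))

model = Net(width=512)   # the model you actually train
base = Net(width=64)     # reference width defining the base shapes
delta = Net(width=128)   # used to infer which dimensions grow with width
set_base_shapes(model, base, delta=delta)
# (mup also recommends re-initializing weights with mup.init afterwards)
optimizer = MuAdam(model.parameters(), lr=1e-3)  # µP per-layer LR scaling
```

The point of this setup is that hyperparameters tuned on the small base-width model transfer to the wide model, which is what makes µP attractive for the pretraining repos listed above.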