SeanNaren / minGPT
A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training!
☆111 · Updated last year
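The tagline above describes training a minGPT-style model through PyTorch Lightning's DeepSpeed integration. As a rough, self-contained sketch of what that wiring looks like (the toy `TinyGPT` module, the random data, and all hyperparameters below are illustrative assumptions, not this repo's actual code; running it requires a GPU and the `deepspeed` package):

```python
# Illustrative sketch only: a toy causal LM trained via PyTorch Lightning's
# built-in DeepSpeed strategy. Nothing here is SeanNaren/minGPT's actual code;
# the model, data, and hyperparameters are placeholders.
import torch
import torch.nn as nn
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset


class TinyGPT(pl.LightningModule):
    """Toy decoder-only LM: token/position embeddings -> causal Transformer -> LM head."""

    def __init__(self, vocab_size=256, d_model=128, n_layer=2, block_size=64):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(block_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layer)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):
        t = idx.size(1)
        x = self.tok(idx) + self.pos(torch.arange(t, device=idx.device))
        causal = nn.Transformer.generate_square_subsequent_mask(t).to(idx.device)
        return self.head(self.blocks(x, mask=causal))

    def training_step(self, batch, batch_idx):
        idx, targets = batch
        logits = self(idx)
        return nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), targets.reshape(-1)
        )

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=3e-4)


if __name__ == "__main__":
    # Random token sequences stand in for a real corpus; next-token targets
    # are the inputs shifted by one position.
    data = torch.randint(0, 256, (512, 65))
    ds = TensorDataset(data[:, :-1], data[:, 1:])
    trainer = pl.Trainer(
        max_epochs=1,
        accelerator="gpu",
        devices=1,
        precision=16,
        strategy="deepspeed_stage_2",  # ZeRO stage 2 via Lightning's DeepSpeed plugin
    )
    trainer.fit(TinyGPT(), DataLoader(ds, batch_size=32))
```

Lightning also exposes the other ZeRO stages the same way, e.g. `strategy="deepspeed_stage_3"` or a configured `DeepSpeedStrategy` instance.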
Related projects
Alternatives and complementary repositories for minGPT
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆92 · Updated last year
- ☆67 · Updated 2 years ago
- ☆71 · Updated 6 months ago
- FairSeq repo with Apollo optimizer ☆108 · Updated 10 months ago
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆100 · Updated 4 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆112 · Updated 2 years ago
- ☆93 · Updated last year
- ☆64 · Updated 2 years ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆76 · Updated 2 years ago
- See details in https://github.com/pytorch/xla/blob/r1.12/torch_xla/distributed/fsdp/README.md ☆23 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆67 · Updated 3 weeks ago
- ☆95 · Updated last year
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆26 · Updated 2 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆136 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆115 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆94 · Updated 7 months ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆124 · Updated 6 months ago
- Large-scale distributed model training strategy with Colossal AI and Lightning AI ☆58 · Updated last year
- A library to create and manage configuration files, especially for machine learning projects. ☆77 · Updated 2 years ago
- A framework for few-shot evaluation of autoregressive language models. ☆101 · Updated last year
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 2 years ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆68 · Updated 9 months ago
- Recurrent Memory Transformer ☆150 · Updated last year
- This is the implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.1… ☆126 · Updated last year
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a… ☆46 · Updated 2 years ago
- ☆94 · Updated last year
- Transformers at any scale ☆41 · Updated 9 months ago
- ☆73 · Updated last year