SeanNaren / minGPT
A minimal PyTorch Lightning OpenAI GPT implementation with DeepSpeed training!
☆111 · Updated last year
Alternatives and similar repositories for minGPT:
Users interested in minGPT are comparing it to the libraries listed below.
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- ☆67 · Updated 2 years ago
- some common Huggingface transformers in maximal update parametrization (µP) ☆80 · Updated 3 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆136 · Updated last year
- Official Pytorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆101 · Updated 4 years ago
- ☆96 · Updated last year
- ☆72 · Updated last year
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆26 · Updated 2 years ago
- ☆73 · Updated 11 months ago
- See details in https://github.com/pytorch/xla/blob/r1.12/torch_xla/distributed/fsdp/README.md ☆24 · Updated 2 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆177 · Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆187 · Updated 2 years ago
- This is the implementation of the paper AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (https://arxiv.org/abs/2205.1… ☆130 · Updated last year
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆79 · Updated 7 months ago
- Experiments with generating opensource language model assistants ☆97 · Updated last year
- Transformers at any scale ☆41 · Updated last year
- Implementation of Mixout with PyTorch ☆75 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences on Pile ☆115 · Updated 2 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆134 · Updated 11 months ago
- Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI ☆57 · Updated last year
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a… ☆46 · Updated 2 years ago
- ☆67 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- ☆97 · Updated 2 years ago
- Repository containing code for "How to Train BERT with an Academic Budget" paper ☆312 · Updated last year
- ☆96 · Updated last year
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch ☆75 · Updated 4 years ago
- A case study of efficient training of large language models using commodity hardware. ☆69 · Updated 2 years ago
- ☆97 · Updated 2 years ago
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago