xrsrke / pipegoose
Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)*
⭐81 · Updated last year
Alternatives and similar repositories for pipegoose:
Users that are interested in pipegoose are comparing it to the libraries listed below
- Some common Hugging Face transformers in maximal update parametrization (µP) ⭐78 · Updated 2 years ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ⭐121 · Updated 9 months ago
- A set of Python scripts that makes your experience on TPU better ⭐48 · Updated 7 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ⭐94 · Updated 2 months ago
- Experiments for efforts to train a new and improved T5 ⭐77 · Updated 10 months ago
- Experiments with generating open-source language model assistants ⭐97 · Updated last year
- Collection of autoregressive model implementations ⭐81 · Updated this week
- Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers ⭐80 · Updated 7 months ago
- HomebrewNLP in JAX flavour for maintainable TPU training ⭐48 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ⭐53 · Updated last year
- Understand and test language model architectures on synthetic tasks. ⭐181 · Updated last month
- Minimal but scalable implementation of large language models in JAX ⭐31 · Updated 3 months ago
- Code repository for the c-BTM paper ⭐105 · Updated last year
- Experiment of using Tangent to autodiff Triton ⭐75 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ⭐96 · Updated 10 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ⭐102 · Updated last month
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ⭐115 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ⭐82 · Updated 2 weeks ago
- Automatically take good care of your preemptible TPUs ⭐36 · Updated last year
- Comprehensive analysis of the difference in performance of QLoRA, LoRA, and full finetunes. ⭐82 · Updated last year