xrsrke / pipegoose
Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)*
★85 · Updated last year
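The "4D" here refers to composing four parallelism dimensions; given the MoE focus, presumably data, tensor, pipeline, and expert parallelism. As a self-contained illustration only (not pipegoose's actual API), the sketch below shows how a flat device rank decomposes into coordinates on such a 4D mesh; the degrees and the ordering are arbitrary choices for the example.

```python
# Conceptual sketch only, not pipegoose's actual API. Shows how one flat
# device rank decomposes into coordinates on a 4D parallelism mesh.
DP, TP, PP, EP = 2, 2, 2, 2            # data / tensor / pipeline / expert degrees (illustrative)
WORLD_SIZE = DP * TP * PP * EP         # 16 devices in total

def mesh_coords(rank: int) -> dict:
    """Map a flat rank to (dp, tp, pp, ep) coordinates, expert fastest-varying."""
    ep, rank = rank % EP, rank // EP
    pp, rank = rank % PP, rank // PP
    tp, rank = rank % TP, rank // TP
    dp = rank % DP
    return {"dp": dp, "tp": tp, "pp": pp, "ep": ep}

for r in range(WORLD_SIZE):
    print(r, mesh_coords(r))
```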
Alternatives and similar repositories for pipegoose
Users interested in pipegoose are comparing it to the libraries listed below.
- Some common Huggingface transformers in maximal update parametrization (µP); see the µP scaling sketch after this list · ★81 · Updated 3 years ago
- ★79 · Updated last year
- ★20 · Updated 2 years ago
- JAX implementation of the Llama 2 model · ★219 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile · ★116 · Updated 2 years ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training · ★129 · Updated last year
- ★61 · Updated 3 years ago
- ★45 · Updated last year
- A set of Python scripts that make your experience on TPU better · ★55 · Updated last year
- ★53 · Updated last year
- Experiments toward training a new and improved T5 · ★76 · Updated last year
- Inference code for LLaMA models in JAX · ★118 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs · ★141 · Updated 2 weeks ago
- Experiments with generating open-source language model assistants · ★97 · Updated 2 years ago
- Understand and test language model architectures on synthetic tasks · ★219 · Updated last month
- Implementation of the Llama architecture with RLHF + Q-learning · ★165 · Updated 5 months ago
- HomebrewNLP in JAX flavour for maintainable TPU training · ★50 · Updated last year
- Experiment using Tangent to autodiff Triton · ★79 · Updated last year
- Collection of autoregressive model implementations · ★85 · Updated 2 months ago
- ★166 · Updated 2 years ago
- Language models scale reliably with over-training and on downstream tasks · ★97 · Updated last year
- ★92 · Updated last year
- ★74 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs (see the packing sketch after this list) · ★194 · Updated 11 months ago
- ★81 · Updated last year
- OSLO: Open Source for Large-scale Optimization · ★175 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes · ★82 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 · ★123 · Updated 6 months ago
- nanoGPT-like codebase for LLM training · ★99 · Updated last month
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … · ★59 · Updated 9 months ago
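For the µP entry above: maximal update parametrization keeps good hyperparameters roughly stable as model width grows by rescaling initialization, per-layer learning rates, and output multipliers with width. The sketch below is a minimal illustration of the commonly cited Adam-flavored scaling rules, not the linked repository's code; `BASE_WIDTH`, `base_lr`, and `base_std` are hypothetical values.

```python
import math

# Hypothetical tuning point: hyperparameters found at this width are meant to
# transfer to larger widths under the µP scaling rules sketched below.
BASE_WIDTH = 256
base_lr, base_std = 1e-3, 0.02

def mup_hidden_settings(width: int) -> dict:
    """Commonly cited µP rules for hidden (width x width) matrices with Adam."""
    m = width / BASE_WIDTH                    # width multiplier
    return {
        "init_std": base_std / math.sqrt(m),  # init variance scales like 1/fan_in
        "adam_lr": base_lr / m,               # per-layer LR shrinks with width
        "output_mult": 1.0 / m,               # output logits scaled down by width
    }

for w in (256, 1024, 4096):
    print(w, mup_hidden_settings(w))
```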
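And for the multipack sampler entry: the padding-free idea is to treat the per-device token budget as a bin and pack variable-length sequences into it so batches carry almost no padding. Below is a minimal sketch of that packing step using a classic first-fit-decreasing heuristic; the linked repository's actual algorithm, distributed balancing, and API may differ.

```python
# Hedged sketch of padding-free "multipack" batching: greedily pack
# variable-length sequences into fixed-capacity bins (token budgets).
def pack_sequences(lengths, capacity):
    """First-fit-decreasing bin packing over sequence lengths.

    Returns a list of bins, each a list of (original_index, length) pairs
    whose lengths sum to at most `capacity`.
    """
    order = sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True)
    bins, room = [], []               # parallel lists: contents and free space
    for i in order:
        n = lengths[i]
        for b, free in enumerate(room):
            if n <= free:             # first existing bin with enough room
                bins[b].append((i, n))
                room[b] -= n
                break
        else:                         # no bin fits: open a new one
            bins.append([(i, n)])
            room.append(capacity - n)
    return bins

# Six sequences fit into three 1024-token bins with only 24 wasted slots.
print(pack_sequences([900, 500, 400, 300, 200, 100], capacity=1024))
```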