hundredblocks / large-model-parallelism
Functional local implementations of main model parallelism approaches
⭐96 · Updated 2 years ago
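For context on what "model parallelism" means here, below is a minimal local sketch of tensor (column) parallelism for a single linear layer. It is not taken from this repository; the shard count, shapes, and variable names are illustrative assumptions, and the two weight shards simply stand in for two devices.

```python
# Minimal local sketch of tensor (column) parallelism for one linear layer.
# Illustrative only, not this repository's code: the weight matrix is split
# column-wise across two "shards" (stand-ins for two devices), each shard
# computes its slice of the output, and the slices are concatenated.
import torch

torch.manual_seed(0)
d_in, d_out, n_shards = 8, 16, 2

x = torch.randn(4, d_in)                # a small batch of activations
full_weight = torch.randn(d_in, d_out)  # the unsharded weight matrix

# Split the weight along the output (column) dimension, one piece per shard.
shards = torch.chunk(full_weight, n_shards, dim=1)

# Each shard computes a partial output using only its slice of the weights.
partial_outputs = [x @ w_shard for w_shard in shards]

# Concatenating the partial outputs recovers the full layer's output.
sharded_out = torch.cat(partial_outputs, dim=1)
reference_out = x @ full_weight
print(torch.allclose(sharded_out, reference_out))  # True
```

Splitting along the output dimension means each shard only needs to hold its slice of the weights; concatenating the partial outputs reproduces the unsharded result exactly.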
Alternatives and similar repositories for large-model-parallelism
Users interested in large-model-parallelism are comparing it to the libraries listed below.
- git extension for {collaborative, communal, continual} model development ⭐215 · Updated 11 months ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ⭐87 · Updated last year
- ⭐94 · Updated 2 years ago
- Train very large language models in Jax. ⭐209 · Updated 2 years ago
- JAX implementation of the Llama 2 model ⭐216 · Updated last year
- A puzzle to learn about prompting ⭐135 · Updated 2 years ago
- ML/DL Math and Method notes ⭐64 · Updated last year
- An interactive exploration of Transformer programming. ⭐269 · Updated last year
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ⭐105 · Updated 2 years ago
- ⭐144 · Updated 2 years ago
- This repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog post ⭐92 · Updated 2 years ago
- A lightweight PyTorch implementation of the Transformer-XL architecture proposed by Dai et al. (2019) ⭐37 · Updated 2 years ago
- Inference code for LLaMA models in JAX ⭐119 · Updated last year
- Automatic gradient descent ⭐215 · Updated 2 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ⭐51 · Updated last year
- ⭐166 · Updated 2 years ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ⭐86 · Updated 3 years ago
- ⭐91 · Updated last year
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ⭐215 · Updated 4 months ago
- ⭐62 · Updated 3 years ago
- A place to store reusable transformer components of my own creation or found on the interwebs ⭐60 · Updated last week
- ⭐61 · Updated last year
- A JAX-based library for building transformers, including implementations of GPT, Gemma, LLaMA, Mixtral, Whisper, Swin, ViT, and more. ⭐295 · Updated last year
- Supercharge Hugging Face transformers with model parallelism. ⭐77 · Updated 3 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ⭐82 · Updated 2 years ago
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training ⭐132 · Updated last year
- A case study of efficient training of large language models using commodity hardware. ⭐68 · Updated 3 years ago
- A library to create and manage configuration files, especially for machine learning projects. ⭐80 · Updated 3 years ago
- ⭐22 · Updated 2 years ago
- Automatically take good care of your preemptible TPUs ⭐37 · Updated 2 years ago