hundredblocks / large-model-parallelism
Functional local implementations of main model parallelism approaches
⭐96 · Updated 2 years ago
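To give a flavor of what such approaches look like, here is a minimal, generic sketch of the simplest one, data parallelism with `jax.pmap`. This is not code from the repository; the toy linear model, batch shapes, and 0.1 learning rate are illustrative assumptions.

```python
# A generic data-parallelism sketch (not taken from this repository).
from functools import partial

import jax
import jax.numpy as jnp

n_dev = jax.local_device_count()


def loss_fn(w, x, y):
    # Toy linear regression; stands in for a real network.
    return jnp.mean((x @ w - y) ** 2)


@partial(jax.pmap, axis_name="devices")
def train_step(w, x, y):
    grads = jax.grad(loss_fn)(w, x, y)
    # All-reduce: average gradients across devices, then apply plain SGD.
    grads = jax.lax.pmean(grads, axis_name="devices")
    return w - 0.1 * grads


# Replicate the parameters on every device and shard the batch
# along the leading (device) axis.
w = jnp.zeros((4, 1))
w_repl = jnp.broadcast_to(w, (n_dev, *w.shape))
x = jnp.ones((n_dev, 8, 4))
y = jnp.ones((n_dev, 8, 1))
w_repl = train_step(w_repl, x, y)
print(w_repl.shape)  # (n_dev, 4, 1): identical params on each device
```

Tensor and pipeline parallelism follow the same pattern but split the model's weights or layers across devices instead of the batch.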
Alternatives and similar repositories for large-model-parallelism
Users interested in large-model-parallelism are comparing it to the libraries listed below.
- git extension for {collaborative, communal, continual} model development ⭐215 · Updated 10 months ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ⭐87 · Updated last year
- JAX implementation of the Llama 2 model ⭐218 · Updated last year
- A puzzle to learn about prompting ⭐135 · Updated 2 years ago
- Train very large language models in Jax. ⭐209 · Updated last year
- ⭐94 · Updated 2 years ago
- Inference code for LLaMA models in JAX ⭐119 · Updated last year
- Automatic gradient descent ⭐213 · Updated 2 years ago
- ML/DL Math and Method notes ⭐64 · Updated last year
- An interactive exploration of Transformer programming. ⭐269 · Updated last year
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ⭐105 · Updated 2 years ago
- A Jax-based library for building transformers; includes implementations of GPT, Gemma, LLaMA, Mixtral, Whisper, Swin, ViT, and more. ⭐293 · Updated last year
- A case study of efficient training of large language models using commodity hardware. ⭐68 · Updated 3 years ago
- Code used for the "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog post ⭐92 · Updated 2 years ago
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗`safetensors` ⭐46 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ⭐60 · Updated 2 weeks ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ⭐83 · Updated 2 years ago
- ⭐62 · Updated last year
- ⭐91 · Updated last year
- ⭐144 · Updated 2 years ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ⭐82 · Updated 3 years ago
- Automatically take good care of your preemptible TPUs ⭐36 · Updated 2 years ago
- ⭐53 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ⭐50 · Updated last year
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ⭐132 · Updated last year
- A lightweight PyTorch implementation of the Transformer-XL architecture proposed by Dai et al. (2019) ⭐37 · Updated 2 years ago
- nanoGPT-like codebase for LLM training ⭐107 · Updated 4 months ago
- A library to create and manage configuration files, especially for machine learning projects. ⭐79 · Updated 3 years ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ⭐256 · Updated last year
- Python library that enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ⭐211 · Updated 4 months ago