LambdaLabsML / distributed-training-guide
Best practices & guides on how to write distributed PyTorch training code
☆536 · Updated 3 weeks ago
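For context, the kind of code the guide covers: a minimal single-node DistributedDataParallel (DDP) training loop, sketched below under standard torchrun conventions. This is an illustrative sketch, not code taken from the repository.

```python
# Minimal single-node DDP sketch (illustrative; not from the guide itself).
# Launch with: torchrun --nproc_per_node=<num_gpus> train.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment;
    # init_process_group picks them up via the default env:// init method.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).to(local_rank)
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).square().mean()
        loss.backward()  # DDP all-reduces gradients across ranks here
        opt.step()
        opt.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```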
Alternatives and similar repositories for distributed-training-guide
Users interested in distributed-training-guide are comparing it to the libraries listed below.
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆822 · Updated 3 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆302 · Updated 2 weeks ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆195 · Updated 5 months ago
- What would you do with 1000 H100s... ☆1,121 · Updated last year
- Building blocks for foundation models. ☆569 · Updated last year
- Annotated version of the Mamba paper ☆490 · Updated last year
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆428 · Updated 8 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆582 · Updated 3 months ago
- Open-source framework for the research and development of foundation models. ☆611 · Updated this week
- An extension of the nanoGPT repository for training small MoE models. ☆210 · Updated 8 months ago
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆1,892 · Updated 2 months ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆452 · Updated this week
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆683 · Updated last week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆543 · Updated 6 months ago
- For optimization algorithm research and development. ☆544 · Updated this week
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆556 · Updated last month
- Implementation of Diffusion Transformer (DiT) in JAX ☆294 · Updated last year
- UNet diffusion model in pure CUDA ☆654 · Updated last year
- Load compute kernels from the Hub ☆326 · Updated last week
- Helpful tools and examples for working with flex-attention (see the sketch after this list) ☆1,053 · Updated this week
- Scalable and Performant Data Loading ☆335 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆271 · Updated last week
- Where GPUs get cooked 👩‍🍳🔥 ☆310 · Updated 2 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆926 · Updated 2 weeks ago
- Slides, notes, and materials for the workshop ☆334 · Updated last year
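As flagged in the flex-attention entry above, here is a minimal FlexAttention sketch: a causal mask expressed as a `score_mod` hook. It assumes PyTorch 2.5 or newer and is illustrative only, not code from any of the listed repositories.

```python
# Causal attention via FlexAttention's score_mod hook
# (illustrative sketch; assumes PyTorch >= 2.5).
import torch
from torch.nn.attention.flex_attention import flex_attention

def causal(score, b, h, q_idx, kv_idx):
    # Keep scores where the query may attend to the key; mask out the future.
    return torch.where(q_idx >= kv_idx, score, float("-inf"))

device = "cuda" if torch.cuda.is_available() else "cpu"
# Shapes are (batch, heads, seq_len, head_dim).
q, k, v = (torch.randn(1, 8, 128, 64, device=device) for _ in range(3))

# In practice flex_attention is usually wrapped in torch.compile for speed.
out = flex_attention(q, k, v, score_mod=causal)
print(out.shape)  # torch.Size([1, 8, 128, 64])
```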