LambdaLabsML / distributed-training-guide
Best practices & guides on how to write distributed PyTorch training code
☆441 · Updated 4 months ago
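As a taste of the topic the guide covers, here is a minimal single-node `DistributedDataParallel` sketch. This is illustrative only, not code from the repository; the model, data, and hyperparameters are placeholders.

```python
# Minimal single-node DDP training sketch -- illustrative, not from the guide.
# Launch with: torchrun --nproc-per-node=4 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; DDP all-reduces gradients across ranks in backward().
    model = DDP(torch.nn.Linear(1024, 1024).cuda(), device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 1024, device="cuda")  # stand-in for a real batch
        loss = model(x).square().mean()           # stand-in for a real loss
        opt.zero_grad()
        loss.backward()  # gradient all-reduce overlaps with the backward pass
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```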
Alternatives and similar repositories for distributed-training-guide
Users interested in distributed-training-guide often compare it to the repositories listed below.
- ☆193 · Updated 4 months ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆801 · Updated 2 weeks ago
- For optimization algorithm research and development. ☆521 · Updated this week
- An extension of the nanoGPT repository for training small MoE models. ☆152 · Updated 3 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆185 · Updated 3 weeks ago
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆476 · Updated last month
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆556 · Updated last week
- UNet diffusion model in pure CUDA ☆608 · Updated 11 months ago
- What would you do with 1000 H100s... ☆1,055 · Updated last year
- ☆504 · Updated 11 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆278 · Updated last year
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆253 · Updated this week
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆1,548 · Updated 3 weeks ago
- Annotated version of the Mamba paper ☆485 · Updated last year
- Building blocks for foundation models. ☆511 · Updated last year
- System 2 Reasoning Link Collection ☆838 · Updated 3 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (a toy sketch of this idea follows this list) ☆339 · Updated 6 months ago
- A bibliography and survey of the papers surrounding o1 ☆1,199 · Updated 7 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆368 · Updated 3 months ago
- Helpful tools and examples for working with flex-attention (see the flex-attention sketch after this list) ☆831 · Updated 2 weeks ago
- Puzzles for exploring transformers ☆350 · Updated 2 years ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆521 · Updated last month
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆399 · Updated 2 weeks ago
- ☆471 · Updated last week
- Minimalistic large language model 3D-parallelism training ☆1,942 · Updated this week
- Scalable and Performant Data Loading ☆278 · Updated this week
- ☆435 · Updated 8 months ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆267 · Updated last month
- Fast bare-bones BPE for modern tokenizer training ☆159 · Updated 2 months ago
- Normalized Transformer (nGPT) ☆184 · Updated 7 months ago
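The memory-layers entry above describes a trainable key-value lookup. Here is a toy sketch of that idea; the class name, sizes, and the plain (non-product-key) lookup are illustrative choices of this sketch, not the linked implementation.

```python
import torch
import torch.nn as nn

class ToyMemoryLayer(nn.Module):
    """Toy trainable key-value memory: inputs select the top-k entries of a
    large learned key/value table. Real memory layers use product keys to
    avoid scoring every key; this sketch scores them all for simplicity."""
    def __init__(self, dim: int, num_keys: int = 16384, topk: int = 8):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_keys, dim) * dim**-0.5)
        self.values = nn.Embedding(num_keys, dim)
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, dim)
        scores = x @ self.keys.t()               # (batch, num_keys)
        w, idx = scores.topk(self.topk, dim=-1)  # keep only the top-k keys
        w = w.softmax(dim=-1)                    # normalize selected scores
        v = self.values(idx)                     # (batch, topk, dim)
        return (w.unsqueeze(-1) * v).sum(dim=1)  # weighted sum of values

x = torch.randn(4, 256)
print(ToyMemoryLayer(256)(x).shape)  # torch.Size([4, 256])
```

The table size (`num_keys`) can grow the parameter count independently of `dim`, while each input only gathers `topk` value vectors, which is the FLOPs-vs-parameters trade the list entry describes.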
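For the flex-attention entry, a minimal sketch of PyTorch's `flex_attention` API (available since PyTorch 2.5), expressing causal masking as a `score_mod` callback. The shapes are arbitrary and this is not code from the linked repository.

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# Causal masking written as a score_mod: flex_attention calls this for each
# (query, key) score, and we send future positions to -inf before softmax.
def causal(score, b, h, q_idx, kv_idx):
    return torch.where(q_idx >= kv_idx, score, -float("inf"))

q = torch.randn(1, 8, 128, 64, device="cuda")  # (batch, heads, seq, head_dim)
k, v = torch.randn_like(q), torch.randn_like(q)
out = flex_attention(q, k, v, score_mod=causal)
print(out.shape)  # torch.Size([1, 8, 128, 64])
```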