LambdaLabsML / distributed-training-guide
Best practices & guides on how to write distributed PyTorch training code
☆433 · Updated 3 months ago
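Since the guide's subject is hands-on distributed PyTorch training code, a minimal single-node DDP sketch follows for orientation. It is an illustrative sketch, not code from the repository; the linear model, synthetic data, hyperparameters, and the `train.py` filename are placeholder assumptions.

```python
# Minimal DistributedDataParallel (DDP) sketch -- illustrative only, not code
# from distributed-training-guide. Launch with:
#   torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    dist.init_process_group(backend="nccl")  # torchrun sets RANK/WORLD_SIZE env vars
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and synthetic data; swap in your own.
    model = DDP(torch.nn.Linear(1024, 1024).cuda(), device_ids=[local_rank])
    dataset = TensorDataset(torch.randn(4096, 1024), torch.randn(4096, 1024))
    sampler = DistributedSampler(dataset)  # shards the dataset across ranks
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle consistently across ranks
        for x, y in loader:
            x, y = x.cuda(), y.cuda()
            loss = torch.nn.functional.mse_loss(model(x), y)
            optimizer.zero_grad()
            loss.backward()  # DDP all-reduces gradients during backward
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```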
Alternatives and similar repositories for distributed-training-guide
Users interested in distributed-training-guide are comparing it to the libraries listed below.
- ☆188 · Updated 3 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆184 · Updated last week
- Scalable and Performant Data Loading ☆269 · Updated this week
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆1,518 · Updated this week
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆793 · Updated last month
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆514 · Updated 3 weeks ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆544 · Updated last week
- An extension of the nanoGPT repository for training small MoE models. ☆147 · Updated 2 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆249 · Updated this week (see the FSDP sketch after this list)
- ☆474 · Updated 10 months ago
- For optimization algorithm research and development. ☆518 · Updated this week
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆380 · Updated last month
- PyTorch per-step fault tolerance (actively under development) ☆302 · Updated last week
- Implementation of Diffusion Transformer (DiT) in JAX ☆276 · Updated 11 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆357 · Updated 2 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆301 · Updated last month
- Annotated version of the Mamba paper ☆482 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆333 · Updated 5 months ago (see the toy memory-layer sketch after this list)
- LoRA and DoRA from Scratch Implementations ☆203 · Updated last year
- ☆157 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,087 · Updated 2 weeks ago
- UNet diffusion model in pure CUDA ☆606 · Updated 11 months ago
- Muon: An optimizer for hidden layers in neural networks ☆678 · Updated last week
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆464 · Updated last week
- Building blocks for foundation models. ☆500 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆876 · Updated last month
- Helpful tools and examples for working with flex-attention ☆811 · Updated this week (see the flex_attention sketch after this list)
- What would you do with 1000 H100s... ☆1,048 · Updated last year
- Code for training & evaluating Contextual Document Embedding models ☆191 · Updated 3 weeks ago
- LLM KV cache compression made easy ☆493 · Updated 3 weeks ago
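As referenced above for the FSDP pretraining entry, here is a minimal sketch of wrapping a model with PyTorch's FullyShardedDataParallel. The model and the size-based wrapping policy are illustrative assumptions, not code from that repository.

```python
# Minimal FSDP wrapping sketch -- an illustrative assumption, not code from
# any repository listed above. Launch with torchrun as in the DDP example.
import functools
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import size_based_auto_wrap_policy

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Placeholder model; a real transformer would instead use a
# transformer_auto_wrap_policy keyed on its block class.
model = torch.nn.Sequential(*[torch.nn.Linear(2048, 2048) for _ in range(8)])

# Shard parameters, gradients, and optimizer state across ranks;
# submodules above ~100k params become their own FSDP units.
wrap_policy = functools.partial(size_based_auto_wrap_policy, min_num_params=100_000)
model = FSDP(model, auto_wrap_policy=wrap_policy, device_id=local_rank)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # create after wrapping
loss = model(torch.randn(8, 2048, device="cuda")).sum()
loss.backward()   # reduce-scatters gradients shard by shard
optimizer.step()  # each rank updates only its own parameter shard
dist.destroy_process_group()
```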
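For the memory-layers entry, a toy reading of the described mechanism: a trainable key-value table where each token retrieves only its top-k slots, so parameters grow with the table while per-token FLOPs stay roughly flat. This is a conceptual sketch of the idea, not code from that repository.

```python
# Toy memory-layer sketch -- an illustrative reading of the description above,
# not the repository's implementation: sparse top-k key-value lookup.
import torch
import torch.nn.functional as F

class ToyMemoryLayer(torch.nn.Module):
    def __init__(self, dim: int, num_slots: int = 4096, topk: int = 4):
        super().__init__()
        self.keys = torch.nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.values = torch.nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, dim)
        scores = x @ self.keys.T                  # (batch, num_slots)
        w, idx = scores.topk(self.topk, dim=-1)   # only k slots participate
        w = F.softmax(w, dim=-1)                  # (batch, topk)
        # Weighted sum of the selected value slots: (batch, topk, dim) -> (batch, dim)
        return (w.unsqueeze(-1) * self.values[idx]).sum(dim=1)

x = torch.randn(2, 64)
print(ToyMemoryLayer(64)(x).shape)  # torch.Size([2, 64])
```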
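And for the flex-attention entry, a small sketch of PyTorch's `torch.nn.attention.flex_attention` (available in PyTorch 2.5+) with a causal `score_mod`; the shapes are illustrative, and this is not code from that examples repository.

```python
# Small flex_attention sketch -- illustrative shapes, not code from the
# flex-attention examples repository. Requires PyTorch >= 2.5 and a GPU;
# in practice flex_attention is usually wrapped in torch.compile for speed.
import torch
from torch.nn.attention.flex_attention import flex_attention

def causal(score, b, h, q_idx, kv_idx):
    # Mask out future positions by sending their scores to -inf.
    return torch.where(q_idx >= kv_idx, score, float("-inf"))

# (batch, heads, seq_len, head_dim)
q, k, v = (torch.randn(2, 8, 128, 64, device="cuda") for _ in range(3))
out = flex_attention(q, k, v, score_mod=causal)
print(out.shape)  # torch.Size([2, 8, 128, 64])
```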