LambdaLabsML / distributed-training-guide
Best practices & guides on how to write distributed PyTorch training code
☆401 · Updated 2 months ago
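As a taste of what the guide covers, here is a minimal sketch of single-node data-parallel training with `torch.distributed` and `DistributedDataParallel`. The model, data, and hyperparameters are placeholders, not code from the repository:

```python
# Minimal DDP training sketch (illustrative only).
# Launch with: torchrun --nproc_per_node=NUM_GPUS train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and synthetic data; swap in your own.
    model = torch.nn.Linear(128, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    dataset = TensorDataset(torch.randn(1024, 128), torch.randint(0, 10, (1024,)))
    # DistributedSampler shards the dataset so each rank sees a distinct slice.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # DDP all-reduces gradients across ranks here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```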
Alternatives and similar repositories for distributed-training-guide:
Users interested in distributed-training-guide are comparing it to the libraries listed below.
- ☆169 · Updated 2 months ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆786 · Updated last month
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆991 · Updated last month
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆178 · Updated this week
- An extension of the nanoGPT repository for training small MoE models. ☆131 · Updated last month
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆534 · Updated this week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆286 · Updated last week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (see the sketch after this list) ☆317 · Updated 4 months ago
- ☆153 · Updated last year
- For optimization algorithm research and development. ☆508 · Updated this week
- ☆424 · Updated 9 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆240 · Updated last week
- Helpful tools and examples for working with flex-attention ☆726 · Updated last week
- ☆155 · Updated 3 months ago
- Scalable and Performant Data Loading ☆237 · Updated this week
- Annotated version of the Mamba paper ☆483 · Updated last year
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆337 · Updated last month
- Muon optimizer: >30% sample efficiency with <3% wallclock overhead ☆577 · Updated last month
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆511 · Updated 5 months ago
- What would you do with 1000 H100s... ☆1,038 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆202 · Updated last year
- A bibliography and survey of the papers surrounding o1 ☆1,187 · Updated 5 months ago
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆385 · Updated 2 weeks ago
- LLM KV cache compression made easy ☆458 · Updated last week
- Building blocks for foundation models. ☆482 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆1,793 · Updated this week
- Code for training & evaluating Contextual Document Embedding models ☆181 · Updated last week
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆325 · Updated this week
- ☆302 · Updated 10 months ago
- Friends of OLMo and their links. ☆274 · Updated 4 months ago
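The memory-layers entry above describes the core idea well enough to sketch: a trainable key-value store where each token's representation scores the keys, selects its top-k matches, and mixes the corresponding values, so parameter count grows with the number of memory slots while the per-token mixing cost stays fixed by k. A minimal, illustrative version follows; it is not that repository's implementation, and all sizes and names here are made up:

```python
# Illustrative memory layer: trainable key-value lookup (hypothetical, simplified).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    def __init__(self, dim: int, num_slots: int = 4096, top_k: int = 4):
        super().__init__()
        # The extra parameters live in these tables; adding slots grows
        # capacity while the mixing step stays fixed at top_k reads per token.
        self.keys = nn.Parameter(torch.randn(num_slots, dim) * dim**-0.5)
        self.values = nn.Parameter(torch.randn(num_slots, dim) * dim**-0.5)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim). Score every memory slot per token...
        scores = x @ self.keys.t()                       # (batch, seq, num_slots)
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)          # (batch, seq, top_k)
        # ...but gather and mix only the top_k value vectors (sparse read).
        picked = self.values[top_idx]                    # (batch, seq, top_k, dim)
        out = (weights.unsqueeze(-1) * picked).sum(dim=-2)
        return x + out                                   # residual connection

x = torch.randn(2, 16, 64)
print(MemoryLayer(64)(x).shape)  # torch.Size([2, 16, 64])
```

Note that this sketch scores all keys densely for clarity, so its lookup cost still scales with the number of slots; real memory-layer implementations avoid that with tricks like product-key decomposition so retrieval cost stays roughly constant as the table grows.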