Best practices & guides on how to write distributed PyTorch training code
☆598, updated Oct 22, 2025
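For context on the topic the guide covers, below is a minimal sketch of single-node data-parallel training with `torch.nn.parallel.DistributedDataParallel`, launched via `torchrun`. The linear model and random tensors are stand-in placeholders, not code from the guide itself.

```python
# Minimal DDP sketch. Launch with: torchrun --nproc_per_node=2 train.py
# The model and data are placeholders; only the distributed scaffolding matters.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(32, 1).cuda()        # placeholder model
    model = DDP(model, device_ids=[local_rank])  # syncs gradients across ranks
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

    dataset = TensorDataset(torch.randn(1024, 32), torch.randn(1024, 1))
    sampler = DistributedSampler(dataset)        # each rank gets a disjoint shard
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    for epoch in range(2):
        sampler.set_epoch(epoch)                 # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(), y.cuda()
            loss = torch.nn.functional.mse_loss(model(x), y)
            opt.zero_grad()
            loss.backward()                      # gradient all-reduce happens here
            opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```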
Alternatives and similar repositories for distributed-training-guide
Users interested in distributed-training-guide are comparing it to the libraries listed below.
- Minimal but scalable implementation of large language models in JAX (☆35, updated Nov 28, 2025)
- (☆241, updated Nov 24, 2025)
- Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs (☆4,752, updated Jul 18, 2025)
- A PyTorch native platform for training generative AI models (☆5,162, updated this week)
- Minimalistic 4D-parallelism distributed training framework for educational purposes (☆2,119, updated Aug 26, 2025)
- For optimization algorithm research and development (☆561, updated Mar 3, 2026)
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… (☆282, updated Nov 24, 2025)
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters (☆133, updated Dec 3, 2024)
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand (☆201, updated Jun 1, 2025)
- Efficient Triton Kernels for LLM Training (☆6,216, updated this week)
- Source code of our paper "PairDistill: Pairwise Relevance Distillation for Dense Retrieval", EMNLP 2024 Main (☆22, updated Nov 28, 2024)
- What would you do with 1000 H100s... (☆1,161, updated Jan 10, 2024)
- Code for the paper "Function-Space Learning Rates" (☆25, updated Jun 3, 2025)
- (☆15, updated Mar 2, 2025)
- Machine Learning Engineering Open Book (☆17,440, updated Mar 16, 2026)
- Minimalistic large language model 3D-parallelism training (☆2,617, updated Feb 19, 2026)
- NanoGPT (124M) in 2 minutes (☆4,848, updated Mar 17, 2026)
- PyTorch native post-training library (☆5,707, updated this week)
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton (☆598, updated Aug 12, 2025)
- PyTorch native quantization and sparsity for training and inference (☆2,739, updated this week)
- 🚀 Efficient implementations of state-of-the-art linear attention models (☆4,630, updated this week)
- Score-based Diffusion models in JAX (☆18, updated Dec 29, 2025)
- (☆92, updated Jul 5, 2024)
- The simplest, fastest repository for training/finetuning medium-sized GPTs (☆190, updated Jan 19, 2026)
- NanoGPT (124M) quality in 2.67B tokens (☆28, updated Sep 17, 2025)
- Efficient Deep Learning Systems course materials (HSE, YSDA) (☆969, updated Mar 14, 2026)
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models (☆1,605, updated Dec 20, 2025)
- DreamSmooth: Improving Model-Based RL with Reward Smoothing (ICLR 2024) (☆12, updated May 6, 2024)
- Exploring Applications of GRPO (☆252, updated Aug 25, 2025)
- (☆23, updated Jan 5, 2025)
- Docker image for NVIDIA GH200 machines, optimized for vLLM serving and HF Trainer finetuning (☆55, updated Feb 22, 2025)
- Supercharge huggingface transformers with model parallelism (☆78, updated Jul 23, 2025)
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna (☆59, updated Oct 18, 2025)
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of Python (☆6,187, updated Aug 22, 2025)
- YaFSDP: Yet another Fully Sharded Data Parallel (☆984, updated Mar 13, 2026)
- Puzzles for learning Triton (☆2,336, updated this week)
- Experimental CUDA kernel framework unifying typed dimensions, NVRTC JIT specialization, and ML-guided tuning (☆46, updated Feb 9, 2026)
- YSDA course in Speech Processing (☆319, updated Mar 13, 2026)
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" (☆603, updated Oct 7, 2025)