☆251 · Updated Jul 25, 2024
Alternatives and similar repositories for varuna
Users interested in varuna are comparing it to the libraries listed below.
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆55 · Updated Dec 11, 2022
- A resilient distributed training framework ☆97 · Updated Apr 11, 2024
- OSLO: Open Source framework for Large-scale model Optimization ☆309 · Updated Aug 25, 2022
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆164 · Updated Jan 12, 2026
- ☆22 · Updated Apr 22, 2024
- Elixir: Train a Large Language Model on a Small GPU Cluster ☆15 · Updated Jun 8, 2023
- Code for "Heterogenity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020☆137Jul 25, 2024Updated last year
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆55 · Updated Jul 21, 2021
- ☆56 · Updated Jan 25, 2021
- Torch Distributed Experimental ☆117 · Updated Aug 5, 2024
- FTPipe and related pipeline model parallelism research. ☆44 · Updated May 16, 2023
- Resource-adaptive cluster scheduler for deep learning training. ☆454 · Updated Mar 5, 2023
- PyTorch extensions for high performance and large scale training. ☆3,400 · Updated Apr 26, 2025
- A GPipe implementation in PyTorch ☆863 · Updated Jul 25, 2024
- Research and development for optimizing transformers ☆131 · Updated Feb 16, 2021
- Memory-efficient transformer. Work in progress. ☆19 · Updated Sep 17, 2022
- PSTensor provides a way to hack the memory management of tensors in TensorFlow and PyTorch by defining your own C++ tensor class. ☆10 · Updated Feb 10, 2022
- ☆145 · Updated Jan 30, 2025
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆35 · Updated Jan 9, 2023
- Training and serving large-scale neural networks with auto parallelization. ☆3,183 · Updated Dec 9, 2023
- Fine-grained GPU sharing primitives ☆148 · Updated Jul 28, 2025
- ☆78 · Updated May 4, 2021
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] ☆47 · Updated Nov 24, 2022
- Microsoft Collective Communication Library ☆384 · Updated Sep 20, 2023
- Bagua speeds up PyTorch ☆884 · Updated Aug 1, 2024
- Simple repository contribution statistics ☆15 · Updated Jan 20, 2026
- Code for reproducing experiments performed for Accordion ☆13 · Updated Jun 11, 2021
- High-performance distributed framework for training deep learning recommendation models based on PyTorch. ☆411 · Updated Jun 14, 2025
- ☆19 · Updated Sep 20, 2022
- Pipeline Parallelism for PyTorch ☆785 · Updated Aug 21, 2024
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,075 · Updated Apr 17, 2024
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆417 · Updated this week
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,861 · Updated Feb 20, 2026
- ☆198 · Updated Aug 31, 2019
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆70 · Updated Mar 20, 2025
- ☆44 · Updated Sep 6, 2021
- Official code for "SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient" ☆149 · Updated Dec 11, 2023
- Tutel MoE: an optimized Mixture-of-Experts library supporting GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 ☆965 · Updated Dec 21, 2025
- ☆26 · Updated Aug 31, 2023