varuna (☆251, updated Jul 25, 2024)
Alternatives and similar repositories for varuna
Users interested in varuna are comparing it to the libraries listed below:
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. (☆55, updated Dec 11, 2022)
- A resilient distributed training framework (☆97, updated Apr 11, 2024)
- OSLO: Open Source framework for Large-scale model Optimization (☆309, updated Aug 25, 2022)
- (☆22, updated Apr 22, 2024)
- Code for "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020 (☆137, updated Jul 25, 2024)
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… (☆164, updated Jan 12, 2026)
- Elixir: Train a Large Language Model on a Small GPU Cluster (☆15, updated Jun 8, 2023)
- (☆56, updated Jan 25, 2021)
- Resource-adaptive cluster scheduler for deep learning training. (☆453, updated Mar 5, 2023)
- Torch Distributed Experimental (☆117, updated Aug 5, 2024)
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models. ICML 2021 (☆56, updated Jul 21, 2021)
- PSTensor provides a way to hack the memory management of tensors in TensorFlow and PyTorch by defining your own C++ Tensor class. (☆10, updated Feb 10, 2022)
- A GPipe implementation in PyTorch (☆862, updated Jul 25, 2024)
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] (☆47, updated Nov 24, 2022)
- Microsoft Collective Communication Library (☆387, updated Sep 20, 2023)
- PyTorch extensions for high-performance and large-scale training. (☆3,403, updated Apr 26, 2025)
- Fine-grained GPU sharing primitives (☆147, updated Jul 28, 2025)
- Training and serving large-scale neural networks with auto parallelization. (☆3,187, updated Dec 9, 2023)
- FTPipe and related pipeline model parallelism research. (☆44, updated May 16, 2023)
- (☆78, updated May 4, 2021)
- A cluster-wide model manager to accelerate DNN training via automated training warmup (☆36, updated Jan 9, 2023)
- (☆199, updated Aug 31, 2019)
- Memory-efficient transformer. Work in progress. (☆19, updated Sep 17, 2022)
- High-performance distributed framework for training deep learning recommendation models based on PyTorch. (☆411, updated Jun 14, 2025)
- (☆19, updated Sep 20, 2022)
- Simple repository contribution statistics (☆15, updated Mar 6, 2026)
- Research and development for optimizing transformers (☆131, updated Feb 16, 2021)
- Bagua speeds up PyTorch (☆884, updated Aug 1, 2024)
- SpotServe: Serving Generative Large Language Models on Preemptible Instances (☆134, updated Feb 22, 2024)
- An experimental parallel training platform (☆56, updated Mar 25, 2024)
- (☆145, updated Jan 30, 2025)
- Code for reproducing experiments performed for Accordion (☆13, updated Jun 11, 2021)
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training (☆1,864, updated this week)
- Depict GPU memory footprint during DNN training in PyTorch (☆11, updated Nov 17, 2022)
- (☆15, updated Apr 20, 2022)
- (☆94, updated Jul 3, 2022)
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. (☆1,078, updated Apr 17, 2024)
- Pipeline Parallelism for PyTorch (☆785, updated Aug 21, 2024)
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. (☆70, updated Mar 20, 2025)