Merak — ☆82 · updated Feb 11, 2026
Alternatives and similar repositories for Merak
Users interested in Merak are comparing it to the repositories listed below.
- DELTA-pytorch: DELTA: Dynamically Optimizing GPU Memory beyond Tensor Recomputation — ☆12 · updated Apr 16, 2024
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models — ☆70 · updated Mar 20, 2025
- A Generic Resource-Aware Hyperparameter Tuning Execution Engine — ☆15 · updated Jan 8, 2022
- ☆78 · updated May 4, 2021
- A resilient distributed training framework — ☆97 · updated Apr 11, 2024
- Bamboo: a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances — ☆55 · updated Dec 11, 2022
- Dynamic resource changes for multi-dimensional parallelism training — ☆30 · updated Aug 22, 2025
- ☆12 · updated Apr 30, 2024
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters — ☆44 · updated Nov 4, 2022
- A baseline repository of Auto-Parallelism in Training Neural Networks — ☆147 · updated Jun 25, 2022
- nnScaler: Compiling DNN models for Parallel Training — ☆124 · updated Sep 23, 2025
- ☆26 · updated Aug 31, 2023
- FTPipe and related pipeline model parallelism research — ☆44 · updated May 16, 2023
- ☆38 · updated Jan 15, 2021
- ☆14 · updated Jan 12, 2022
- Sequence-level 1F1B schedule for LLMs — ☆19 · updated Jun 4, 2024
- Zero Bubble Pipeline Parallelism — ☆451 · updated May 7, 2025
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup — ☆35 · updated Jan 9, 2023
- A library to analyze PyTorch traces — ☆467 · updated Feb 4, 2026
- An Efficient Pipelined Data Parallel Approach for Training Large Model — ☆75 · updated Dec 11, 2020
- A schedule language for large model training — ☆152 · updated Aug 21, 2025
- HeliosArtifact — ☆22 · updated Sep 27, 2022
- A high-performance distributed deep learning system targeting large-scale and automated distributed training — ☆333 · updated Dec 13, 2025
- [IJCAI 2023] An automated parallel training system that combines the advantages of both data and model parallelism. If you have any inte… — ☆52 · updated May 31, 2023
- Work-in-progress LLM framework — ☆15 · updated Oct 31, 2024
- ☆10 · updated Oct 8, 2018
- ☆25 · updated Apr 3, 2023
- ☆28 · updated Jul 11, 2021
- Pipeline Parallelism for PyTorch — ☆785 · updated Aug 21, 2024
- Fine-grained GPU sharing primitives — ☆148 · updated Jul 28, 2025
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… — ☆164 · updated Jan 12, 2026
- Code for reproducing experiments performed for Accordion — ☆13 · updated Jun 11, 2021
- Official implementation for the paper Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapp… — ☆14 · updated Nov 17, 2025
- An experimental parallel training platform — ☆56 · updated Mar 25, 2024
- Scripts used to create the data for the ATC 2020 paper "Reconstructing proprietary video streaming algorithms" — ☆14 · updated Mar 24, 2021
- ☆392 · updated Nov 4, 2022
- ☆27 · updated Oct 26, 2019
- ☆84 · updated Dec 2, 2022
- ☆12 · updated May 3, 2020