Alternatives and similar repositories for Merak (☆84, Feb 11, 2026, updated 2 months ago)
Users interested in Merak are comparing it to the libraries listed below.
- DELTA-pytorch: DELTA: Dynamically Optimizing GPU Memory beyond Tensor Recomputation (☆12, Apr 16, 2024, updated 2 years ago)
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. (☆71, Mar 20, 2025, updated last year)
- A resilient distributed training framework (☆99, Apr 11, 2024, updated 2 years ago)
- ☆12 (Apr 30, 2024, updated 2 years ago)
- ☆78 (May 4, 2021, updated 4 years ago)
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. (☆55, Dec 11, 2022, updated 3 years ago)
- A baseline repository of Auto-Parallelism in Training Neural Networks (☆146, Jun 25, 2022, updated 3 years ago)
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. (☆44, Nov 4, 2022, updated 3 years ago)
- Dynamic resource changes for multi-dimensional parallelism training (☆31, Aug 22, 2025, updated 8 months ago)
- A Generic Resource-Aware Hyperparameter Tuning Execution Engine (☆15, Jan 8, 2022, updated 4 years ago)
- Sequence-level 1F1B schedule for LLMs. (☆19, Jun 4, 2024, updated last year)
- ☆14 (Jan 12, 2022, updated 4 years ago)
- Zero Bubble Pipeline Parallelism (☆452, May 7, 2025, updated 11 months ago)
- FTPipe and related pipeline model parallelism research. (☆44, May 16, 2023, updated 2 years ago)
- nnScaler: Compiling DNN models for Parallel Training (☆129, Apr 8, 2026, updated 3 weeks ago)
- Official implementation for the paper Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapp… (☆14, Nov 17, 2025, updated 5 months ago)
- ☆27 (Aug 31, 2023, updated 2 years ago)
- Work-in-progress LLM framework. (☆15, Oct 31, 2024, updated last year)
- An Efficient Pipelined Data Parallel Approach for Training Large Model (☆76, Dec 11, 2020, updated 5 years ago)
- ☆38 (Jan 15, 2021, updated 5 years ago)
- An experimental parallel training platform (☆57, Mar 25, 2024, updated 2 years ago)
- A library to analyze PyTorch traces. (☆510, Apr 22, 2026, updated last week)
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. (☆335, Dec 13, 2025, updated 4 months ago)
- [IJCAI2023] An automated parallel training system that combines the advantages of both data and model parallelism. If you have any inte… (☆52, May 31, 2023, updated 2 years ago)
- Training and serving large-scale neural networks with auto parallelization. (☆3,186, Dec 9, 2023, updated 2 years ago)
- ☆17 (Dec 9, 2022, updated 3 years ago)
- A schedule language for large model training (☆152, Aug 21, 2025, updated 8 months ago)
- HeliosArtifact (☆22, Sep 27, 2022, updated 3 years ago)
- Code for reproducing experiments performed for Accordion (☆13, Jun 11, 2021, updated 4 years ago)
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup (☆36, Jan 9, 2023, updated 3 years ago)
- Pipeline Parallelism for PyTorch (☆785, Aug 21, 2024, updated last year)
- Fine-grained GPU sharing primitives (☆147, Jul 28, 2025, updated 9 months ago)
- A curated list of awesome projects and papers for distributed training or inference (☆274, Oct 8, 2024, updated last year)
- ☆392 (Nov 4, 2022, updated 3 years ago)
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training (☆1,873, Apr 23, 2026, updated last week)
- ☆84 (Dec 2, 2022, updated 3 years ago)
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… (☆15, Sep 21, 2023, updated 2 years ago)
- Official implementation of TBA for async LLM post-training. (☆30, Nov 5, 2025, updated 5 months ago)
- Sample Codes using NVSHMEM on Multi-GPU (☆30, Jan 22, 2023, updated 3 years ago)