A baseline repository of Auto-Parallelism in Training Neural Networks
☆147 · Jun 25, 2022 · Updated 3 years ago
Alternatives and similar repositories for awesome-Auto-Parallelism
Users interested in awesome-Auto-Parallelism are comparing it to the libraries listed below.
- DELTA-pytorch: Dynamically Optimizing GPU Memory beyond Tensor Recomputation ☆12 · Apr 16, 2024 · Updated last year
- ☆84 · Feb 11, 2026 · Updated 2 months ago
- A curated list of awesome projects and papers for distributed training or inference ☆271 · Oct 8, 2024 · Updated last year
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆44 · Nov 4, 2022 · Updated 3 years ago
- ☆12 · Apr 30, 2024 · Updated last year
- Papers and code for AI systems ☆357 · Feb 10, 2026 · Updated 2 months ago
- Training and serving large-scale neural networks with auto parallelization. ☆3,187 · Dec 9, 2023 · Updated 2 years ago
- An experimental parallel training platform ☆56 · Mar 25, 2024 · Updated 2 years ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆81 · Nov 19, 2024 · Updated last year
- A schedule language for large model training ☆152 · Aug 21, 2025 · Updated 7 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆239 · Sep 24, 2023 · Updated 2 years ago
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ☆1,004 · Mar 3, 2026 · Updated last month
- Play GEMM with TVM ☆91 · Jul 22, 2023 · Updated 2 years ago
- Distributed Compiler based on Triton for Parallel Systems ☆1,403 · Updated this week
- An IR for efficiently simulating distributed ML computation. ☆33 · Jan 13, 2024 · Updated 2 years ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆97 · Apr 22, 2023 · Updated 2 years ago
- Yet another MySQL storage engine, a database course project. ☆13 · Dec 23, 2022 · Updated 3 years ago
- Synthesizer for optimal collective communication algorithms ☆123 · Apr 8, 2024 · Updated 2 years ago
- Supplemental materials for the ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆25 · May 12, 2025 · Updated 11 months ago
- ☆14 · Jan 12, 2022 · Updated 4 years ago
- ☆84 · Dec 2, 2022 · Updated 3 years ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆479 · Mar 15, 2024 · Updated 2 years ago
- Reference code for https://arxiv.org/abs/1906.08879 ☆18 · Oct 25, 2019 · Updated 6 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆70 · Mar 20, 2025 · Updated last year
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo ☆17 · Mar 13, 2023 · Updated 3 years ago
- Pipeline Parallelism for PyTorch ☆786 · Aug 21, 2024 · Updated last year
- ☆41 · Oct 12, 2020 · Updated 5 years ago
- Compiler for Dynamic Neural Networks ☆45 · Nov 13, 2023 · Updated 2 years ago
- ☆325 · Jan 22, 2024 · Updated 2 years ago
- A simple API for using CUPTI ☆10 · Aug 19, 2025 · Updated 7 months ago
- Distributed Communication-Optimal Matrix-Matrix Multiplication Algorithm ☆213 · Updated this week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,284 · Aug 28, 2025 · Updated 7 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆664 · Jan 15, 2026 · Updated 2 months ago
- Artifacts for our SIGCOMM'22 paper Muri ☆43 · Dec 29, 2023 · Updated 2 years ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆335 · Dec 13, 2025 · Updated 4 months ago
- ☆30 · Sep 4, 2023 · Updated 2 years ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆124 · Dec 18, 2023 · Updated 2 years ago
- A list of awesome compiler projects and papers for tensor computation and deep learning. ☆2,734 · Oct 19, 2024 · Updated last year
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,872 · Updated this week