A baseline repository of Auto-Parallelism in Training Neural Networks
☆146 · Jun 25, 2022 · Updated 3 years ago
Alternatives and similar repositories for awesome-Auto-Parallelism
Users interested in awesome-Auto-Parallelism are comparing it to the libraries listed below.
- DELTA-pytorch: Dynamically Optimizing GPU Memory beyond Tensor Recomputation ☆12 · Apr 16, 2024 · Updated 2 years ago
- ☆84 · Feb 11, 2026 · Updated 2 months ago
- A curated list of awesome projects and papers for distributed training or inference ☆274 · Oct 8, 2024 · Updated last year
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆44 · Nov 4, 2022 · Updated 3 years ago
- ☆12 · Apr 30, 2024 · Updated 2 years ago
- Paper and its code for AI System ☆359 · Feb 10, 2026 · Updated 2 months ago
- Training and serving large-scale neural networks with auto parallelization. ☆3,187 · Dec 9, 2023 · Updated 2 years ago
- An experimental parallel training platform ☆57 · Mar 25, 2024 · Updated 2 years ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆80 · Nov 19, 2024 · Updated last year
- A schedule language for large model training ☆152 · Aug 21, 2025 · Updated 8 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆243 · Sep 24, 2023 · Updated 2 years ago
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RL ☆1,009 · Mar 3, 2026 · Updated 2 months ago
- Play GEMM with TVM ☆91 · Jul 22, 2023 · Updated 2 years ago
- Distributed Compiler based on Triton for Parallel Systems ☆1,420 · Apr 22, 2026 · Updated last week
- An IR for efficiently simulating distributed ML computation. ☆33 · Jan 13, 2024 · Updated 2 years ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆97 · Apr 22, 2023 · Updated 3 years ago
- Yet another MySQL storage engine, a database course project. ☆13 · Dec 23, 2022 · Updated 3 years ago
- Synthesizer for optimal collective communication algorithms ☆123 · Apr 8, 2024 · Updated 2 years ago
- Supplemental materials for the ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆25 · May 12, 2025 · Updated 11 months ago
- ☆14 · Jan 12, 2022 · Updated 4 years ago
- ☆84 · Dec 2, 2022 · Updated 3 years ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆479 · Mar 15, 2024 · Updated 2 years ago
- Reference code for https://arxiv.org/abs/1906.08879 ☆18 · Oct 25, 2019 · Updated 6 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆71 · Mar 20, 2025 · Updated last year
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo ☆17 · Mar 13, 2023 · Updated 3 years ago
- Pipeline Parallelism for PyTorch ☆786 · Aug 21, 2024 · Updated last year
- ☆41 · Oct 12, 2020 · Updated 5 years ago
- Compiler for Dynamic Neural Networks ☆45 · Nov 13, 2023 · Updated 2 years ago
- ☆328 · Jan 22, 2024 · Updated 2 years ago
- A simple API to use CUPTI ☆10 · Aug 19, 2025 · Updated 8 months ago
- Distributed Communication-Optimal Matrix-Matrix Multiplication Algorithm ☆213 · Apr 18, 2026 · Updated 2 weeks ago
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,295 · Aug 28, 2025 · Updated 8 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆666 · Jan 15, 2026 · Updated 3 months ago
- Artifacts for our SIGCOMM'22 paper Muri ☆43 · Dec 29, 2023 · Updated 2 years ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆335 · Dec 13, 2025 · Updated 4 months ago
- ☆30 · Sep 4, 2023 · Updated 2 years ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆124 · Dec 18, 2023 · Updated 2 years ago
- A list of awesome compiler projects and papers for tensor computation and deep learning. ☆2,741 · Oct 19, 2024 · Updated last year
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆129 · Jul 13, 2024 · Updated last year