A high-performance distributed deep learning system targeting large-scale and automated distributed training.
☆335 · Dec 13, 2025 · Updated 3 months ago
Alternatives and similar repositories for Hetu
Users interested in Hetu are comparing it to the libraries listed below.
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆124 · Dec 18, 2023 · Updated 2 years ago
- An efficient open-source AutoML system for automating machine learning lifecycle, including feature engineering, neural architecture sear… ☆64 · Nov 11, 2025 · Updated 4 months ago
- Galvatron is an automatic distributed training system designed for Transformer models, including Large Language Models (LLMs). If you hav… ☆23 · Oct 22, 2025 · Updated 5 months ago
- Towards Generalized and Efficient Blackbox Optimization System/Package (KDD 2021 & JMLR 2024) ☆433 · Updated this week
- A scalable graph learning toolkit for extremely large graph datasets. (WWW'22, 🏆 Best Student Paper Award) ☆157 · May 10, 2024 · Updated last year
- An experimental parallel training platform ☆56 · Mar 25, 2024 · Updated 2 years ago
- A customized and efficient database tuning system [VLDB'22] ☆37 · Sep 21, 2023 · Updated 2 years ago
- DELTA-pytorch: DELTA: Dynamically Optimizing GPU Memory beyond Tensor Recomputation ☆12 · Apr 16, 2024 · Updated last year
- Generalized and Efficient Blackbox Optimization System. ☆85 · Feb 21, 2023 · Updated 3 years ago
- ☆14 · Jan 12, 2022 · Updated 4 years ago
- Zero Bubble Pipeline Parallelism ☆452 · May 7, 2025 · Updated 10 months ago
- ☆89 · Apr 2, 2022 · Updated 3 years ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 · Mar 31, 2023 · Updated 2 years ago
- ☆20 · Oct 31, 2022 · Updated 3 years ago
- [IJCAI2023] An automated parallel training system that combines the advantages from both data and model parallelism. If you have any inte… ☆52 · May 31, 2023 · Updated 2 years ago
- Tutel MoE: Optimized Mixture-of-Experts Library, Support GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 ☆981 · Updated this week
- Efficient Hyper-parameter Tuning at Scale (VLDB'22) ☆10 · Dec 1, 2021 · Updated 4 years ago
- ☆80 · Mar 7, 2022 · Updated 4 years ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆653 · Jan 15, 2026 · Updated 2 months ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Sep 21, 2023 · Updated 2 years ago
- ☆84 · Feb 11, 2026 · Updated last month
- Synthesizer for optimal collective communication algorithms ☆123 · Apr 8, 2024 · Updated last year
- ☆392 · Nov 4, 2022 · Updated 3 years ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Jun 25, 2022 · Updated 3 years ago
- Training and serving large-scale neural networks with auto parallelization. ☆3,187 · Dec 9, 2023 · Updated 2 years ago
- ☆12 · Apr 30, 2024 · Updated last year
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,273 · Aug 28, 2025 · Updated 7 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆44 · Nov 4, 2022 · Updated 3 years ago
- InternEvo is an open-sourced lightweight training framework that aims to support model pre-training without the need for extensive dependencie… ☆419 · Aug 21, 2025 · Updated 7 months ago
- THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression ☆19 · Jul 30, 2024 · Updated last year
- ☆220 · Aug 17, 2023 · Updated 2 years ago
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,869 · Updated this week
- An efficient open-source AutoML system for automating machine learning lifecycle, including feature engineering, neural architecture sear… ☆63 · Jan 26, 2022 · Updated 4 years ago
- Distributed Compiler based on Triton for Parallel Systems ☆1,398 · Mar 11, 2026 · Updated 2 weeks ago
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ☆1,000 · Mar 3, 2026 · Updated 3 weeks ago
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆19 · Dec 8, 2023 · Updated 2 years ago
- Sequence-level 1F1B schedule for LLMs. ☆38 · Aug 26, 2025 · Updated 7 months ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆98 · Apr 22, 2023 · Updated 2 years ago
- ☆84 · Dec 2, 2022 · Updated 3 years ago