A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you are interested, please visit/star/fork https://github.com/PKU-DAIR/Hetu
☆124 · Dec 18, 2023 · Updated 2 years ago
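Most systems in this list automate data-parallel training, whose core idea is that each worker computes gradients on its own shard of the batch and the shard gradients are then averaged (all-reduced). A minimal NumPy sketch of that invariant — purely illustrative, not Hetu's actual API:

```python
import numpy as np

# Toy linear model: loss = mean((x @ w - y)^2).
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 1))
x = rng.standard_normal((8, 4))
y = rng.standard_normal((8, 1))

def mse_grad(xs, ys, w):
    # Gradient of mean squared error w.r.t. w for one data shard.
    return 2.0 / len(xs) * xs.T @ (xs @ w - ys)

# Single-worker full-batch gradient.
full_grad = mse_grad(x, y, w)

# "Distributed" version: two workers each process half the batch,
# then average (all-reduce) their gradients.
shard_grads = [mse_grad(x[s], y[s], w) for s in (slice(0, 4), slice(4, 8))]
avg_grad = sum(shard_grads) / len(shard_grads)

print(np.allclose(avg_grad, full_grad))  # → True
```

With equal shard sizes, the averaged shard gradients exactly equal the full-batch gradient, which is why synchronous data parallelism is mathematically equivalent to large-batch single-device training.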
Alternatives and similar repositories for Hetu
Users that are interested in Hetu are comparing it to the libraries listed below.
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆336 · Dec 13, 2025 · Updated 3 months ago
- Galvatron is an automatic distributed training system designed for Transformer models, including Large Language Models (LLMs). ☆23 · Oct 22, 2025 · Updated 5 months ago
- ☆20 · Oct 31, 2022 · Updated 3 years ago
- Research paper list for host networking: in a system view ☆10 · Jan 2, 2025 · Updated last year
- An Attention Superoptimizer ☆22 · Jan 20, 2025 · Updated last year
- ☆23 · Jan 7, 2022 · Updated 4 years ago
- [IJCAI2023] An automated parallel training system that combines the advantages of both data and model parallelism. ☆52 · May 31, 2023 · Updated 2 years ago
- A scalable graph learning toolkit for extremely large graph datasets. (WWW'22, 🏆 Best Student Paper Award) ☆157 · May 10, 2024 · Updated last year
- Binary Neural Network-based COVID-19 Face-Mask Wear and Positioning Predictor on Edge Devices ☆12 · Jul 1, 2021 · Updated 4 years ago
- High-performance RDMA-based distributed feature collection component for training GNN models on extremely large graphs ☆55 · Jul 3, 2022 · Updated 3 years ago
- ☆13 · Jan 23, 2021 · Updated 5 years ago
- Herald: Accelerating Neural Recommendation Training with Embedding Scheduling (NSDI 2024) ☆23 · May 9, 2024 · Updated last year
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Jun 25, 2022 · Updated 3 years ago
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆24 · Nov 21, 2024 · Updated last year
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆57 · May 29, 2024 · Updated last year
- Artifact for PPoPP20 "Understanding and Bridging the Gaps in Current GNN Performance Optimizations" ☆41 · Nov 16, 2021 · Updated 4 years ago
- Towards Generalized and Efficient Blackbox Optimization System/Package (KDD 2021 & JMLR 2024) ☆433 · Updated this week
- ☆89 · Apr 2, 2022 · Updated 3 years ago
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆19 · Dec 8, 2023 · Updated 2 years ago
- Matrix multiplication on GPUs for matrices stored on a CPU. Similar to cublasXt, but ported to both NVIDIA and AMD GPUs. ☆32 · Apr 2, 2025 · Updated 11 months ago
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 ☆978 · Mar 6, 2026 · Updated 2 weeks ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 · Mar 31, 2023 · Updated 2 years ago
- ☆80 · Mar 7, 2022 · Updated 4 years ago
- ☆11 · Jun 25, 2021 · Updated 4 years ago
- Dorylus: Affordable, Scalable, and Accurate GNN Training ☆76 · May 31, 2021 · Updated 4 years ago
- Papers and their code for AI systems ☆356 · Feb 10, 2026 · Updated last month
- A fast MoE implementation for PyTorch ☆1,846 · Feb 10, 2025 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆112 · Sep 10, 2024 · Updated last year
- A Really Scalable RL Framework to 10k+ CPUs ☆38 · Feb 29, 2024 · Updated 2 years ago
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,864 · Updated this week
- Training and serving large-scale neural networks with auto parallelization. ☆3,187 · Dec 9, 2023 · Updated 2 years ago
- SOTA Learning-augmented Systems ☆37 · May 21, 2022 · Updated 3 years ago
- An experimental parallel training platform ☆56 · Mar 25, 2024 · Updated 2 years ago
- Microsoft Collective Communication Library ☆387 · Sep 20, 2023 · Updated 2 years ago
- nnScaler: Compiling DNN models for Parallel Training ☆126 · Sep 23, 2025 · Updated 6 months ago
- A lightweight design for computation-communication overlap. ☆225 · Jan 20, 2026 · Updated 2 months ago
- Distributed SDDMM Kernel ☆12 · Jul 8, 2022 · Updated 3 years ago
- ☆392 · Nov 4, 2022 · Updated 3 years ago
- PyTorch Library for Low-Latency, High-Throughput Graph Learning on GPUs ☆302 · Aug 17, 2023 · Updated 2 years ago