Hsword / Hetu
A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you are interested, please visit, star, or fork https://github.com/PKU-DAIR/Hetu
☆124 · Dec 18, 2023 · Updated 2 years ago
Alternatives and similar repositories for Hetu
Users interested in Hetu are comparing it to the libraries listed below.
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆333 · Dec 13, 2025 · Updated 2 months ago
- An Attention Superoptimizer ☆22 · Jan 20, 2025 · Updated last year
- Galvatron is an automatic distributed training system designed for Transformer models, including Large Language Models (LLMs). If you hav… ☆23 · Oct 22, 2025 · Updated 3 months ago
- A research paper list for host networking, from a systems view ☆10 · Jan 2, 2025 · Updated last year
- ☆23 · Jan 7, 2022 · Updated 4 years ago
- Binary Neural Network-based COVID-19 Face-Mask Wear and Positioning Predictor on Edge Devices ☆12 · Jul 1, 2021 · Updated 4 years ago
- High-performance RDMA-based distributed feature collection component for training GNN models on extremely large graphs ☆56 · Jul 3, 2022 · Updated 3 years ago
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆25 · Nov 21, 2024 · Updated last year
- A baseline repository for auto-parallelism in training neural networks ☆147 · Jun 25, 2022 · Updated 3 years ago
- Official repository for the paper "DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines" ☆19 · Dec 8, 2023 · Updated 2 years ago
- Matrix multiplication on GPUs for matrices stored on a CPU. Similar to cublasXt, but ported to both NVIDIA and AMD GPUs. ☆32 · Apr 2, 2025 · Updated 10 months ago
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS '24) ☆56 · May 29, 2024 · Updated last year
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 · Mar 31, 2023 · Updated 2 years ago
- A scalable graph learning toolkit for extremely large graph datasets (WWW '22, 🏆 Best Student Paper Award) ☆157 · May 10, 2024 · Updated last year
- ☆19 · Aug 26, 2021 · Updated 4 years ago
- An experimental parallel training platform ☆56 · Mar 25, 2024 · Updated last year
- ☆19 · Oct 31, 2022 · Updated 3 years ago
- Papers and accompanying code for AI systems ☆347 · Updated this week
- Dorylus: Affordable, Scalable, and Accurate GNN Training ☆76 · May 31, 2021 · Updated 4 years ago
- Herald: Accelerating Neural Recommendation Training with Embedding Scheduling (NSDI 2024) ☆23 · May 9, 2024 · Updated last year
- Artifact for PPoPP '20 "Understanding and Bridging the Gaps in Current GNN Performance Optimizations" ☆40 · Nov 16, 2021 · Updated 4 years ago
- SOTA Learning-augmented Systems ☆37 · May 21, 2022 · Updated 3 years ago
- KuaiSearch PERKS ☆12 · Nov 16, 2021 · Updated 4 years ago
- ☆89 · Apr 2, 2022 · Updated 3 years ago
- Galvatron is an automatic distributed training system designed for Transformer models, including Large Language Models (LLMs). ☆176 · Jan 19, 2026 · Updated 3 weeks ago
- nnScaler: Compiling DNN models for Parallel Training ☆124 · Sep 23, 2025 · Updated 4 months ago
- PyTorch Library for Low-Latency, High-Throughput Graph Learning on GPUs ☆301 · Aug 17, 2023 · Updated 2 years ago
- A lightweight design for computation-communication overlap ☆219 · Jan 20, 2026 · Updated 3 weeks ago
- Play GEMM with TVM ☆91 · Jul 22, 2023 · Updated 2 years ago
- Tutel MoE: an optimized Mixture-of-Experts library supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 ☆963 · Dec 21, 2025 · Updated last month
- An MLIR-based compiler from C/C++ to AMD-Xilinx Versal AIE ☆18 · Aug 5, 2022 · Updated 3 years ago
- Distributed SDDMM Kernel ☆12 · Jul 8, 2022 · Updated 3 years ago
- An efficient storage system for concurrent graph processing ☆10 · Feb 1, 2021 · Updated 5 years ago
- PSTensor provides a way to hack the memory management of tensors in TensorFlow and PyTorch by defining your own C++ tensor class. ☆10 · Feb 10, 2022 · Updated 4 years ago
- Read the source code of boltdb and re-implement it in C++ ☆12 · Jun 2, 2018 · Updated 7 years ago
- A unified programming framework for high, portable performance across FPGAs and GPUs ☆11 · Mar 23, 2025 · Updated 10 months ago
- A Framework for Graph Sampling and Random Walk on GPUs ☆38 · Feb 3, 2025 · Updated last year
- ☆11 · Aug 4, 2020 · Updated 5 years ago
- Training and serving large-scale neural networks with auto parallelization ☆3,180 · Dec 9, 2023 · Updated 2 years ago