YukeWang96 / MGG_OSDI23
Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-Grained Intra-Kernel Communication-Computation Pipelining on Multi-GPU Platforms.
☆37, updated 8 months ago
Related projects
Alternatives and complementary repositories for MGG_OSDI23
- Artifact for PPoPP'22 QGTC: Accelerating Quantized GNN via GPU Tensor Core. (☆27, updated 2 years ago)
- Artifact for OSDI'21 GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs. (☆63, updated last year)
- Distributed Multi-GPU GNN Framework. (☆36, updated 4 years ago)
- Artifact for PPoPP'20 "Understanding and Bridging the Gaps in Current GNN Performance Optimizations". (☆39, updated 3 years ago)
- Graphiler is a compiler stack built on top of DGL and TorchScript which compiles GNNs defined using user-defined functions (UDFs) into ef… (☆60, updated 2 years ago)
- Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching. (☆36, updated 4 months ago)
- Artifact evaluation of the paper "Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining". (☆24, updated 2 years ago)
- A Factored System for Sample-based GNN Training over GPUs. (☆42, updated last year)
- SoCC'20 and TPDS'21: Scaling GNN Training on Large Graphs via Computation-aware Caching and Partitioning. (☆48, updated last year)
- Artifact for USENIX ATC'23: TC-GNN: Bridging Sparse GNN Computation and Dense Tensor Cores on GPUs. (☆45, updated last year)
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24). (☆43, updated 5 months ago)
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization. (☆25, updated 9 months ago)
- [ICLR 2022] "PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication" by Cheng Wan, Y… (☆31, updated last year)
- Source code of the SC '23 paper: "DASP: Specific Dense Matrix Multiply-Accumulate Units Accelerated General Sparse Matrix-Vector Multipli…" (☆18, updated 5 months ago)
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. (☆81, updated last year)
- Compiler for Dynamic Neural Networks. (☆43, updated last year)