dgSPARSE / dgNN
[MLSys'22] Understanding GNN Computational Graph: A Coordinated Computation, IO, and Memory Perspective
☆17 · Updated last year
Alternatives and similar repositories for dgNN:
Users interested in dgNN are comparing it to the libraries listed below.
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained Intra-kernel Communication-Computation Pipelining on Mult… ☆38 · Updated 10 months ago
- Artifact for USENIX ATC'23: TC-GNN: Bridging Sparse GNN Computation and Dense Tensor Cores on GPUs. ☆45 · Updated last year
- Artifact for PPoPP'22 QGTC: Accelerating Quantized GNN via GPU Tensor Core. ☆27 · Updated 2 years ago
- Graphiler is a compiler stack built on top of DGL and TorchScript which compiles GNNs defined using user-defined functions (UDFs) into ef… ☆61 · Updated 2 years ago
- ☆104 · Updated 3 years ago
- ☆9 · Updated 2 years ago
- Artifact for PPoPP'20 "Understanding and Bridging the Gaps in Current GNN Performance Optimizations" ☆39 · Updated 3 years ago
- [HPCA 2022] GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆35 · Updated 2 years ago
- Workload-Aware Co-Optimization ☆8 · Updated last year
- ☆14 · Updated 2 years ago
- Artifact for OSDI'21 GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs. ☆64 · Updated last year
- ☆26 · Updated 7 months ago
- The official code for the DATE'23 paper "CLAP: Locality Aware and Parallel Triangle Counting with Content Addressable Memory" ☆21 · Updated 3 months ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) for deep learning on Tensor Cores. ☆85 · Updated 2 years ago
- ☆8 · Updated 2 years ago
- [ICLR 2022] "PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication" by Cheng Wan, Y… ☆31 · Updated last year
- Source code of the SC'23 paper "DASP: Specific Dense Matrix Multiply-Accumulate Units Accelerated General Sparse Matrix-Vector Multipli… ☆24 · Updated 7 months ago
- Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching ☆36 · Updated 6 months ago
- Repository for artifact evaluation of the ASPLOS 2023 paper "SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning" ☆24 · Updated last year
- PyTorch-Based Fast and Efficient Processing for Various Machine Learning Applications with Diverse Sparsity ☆100 · Updated last week
- ☆32 · Updated 2 years ago
- Mirror of http://gitlab.hpcrl.cse.ohio-state.edu/chong/ppopp19_ae, refactored for understanding ☆14 · Updated 3 years ago
- ☆27 · Updated 5 months ago
- Repo for the IISWC 2018 submission ☆9 · Updated 2 years ago
- Implementation of the FusedMM method from the IPDPS 2021 paper "FusedMM: A Unified SDDMM-SpMM Kernel for Graph Embedding and Graph Neural N… (see the SpMM/SDDMM sketch after this list) ☆30 · Updated 2 years ago
- Distributed Multi-GPU GNN Framework ☆36 · Updated 4 years ago
- ☆21 · Updated last year
- ☆73 · Updated 3 years ago
- ☆10 · Updated last year
- SoCC'20 and TPDS'21: Scaling GNN Training on Large Graphs via Computation-aware Caching and Partitioning. ☆50 · Updated last year
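Several of the entries above (Magicube, FusedMM, and the dgSPARSE/dgNN kernels themselves) center on two sparse primitives, SpMM and SDDMM. The sketch below is a minimal, library-free illustration of their reference semantics using plain PyTorch sparse tensors; it is not taken from any repository listed here, and the function names are placeholders.

```python
# Minimal sketch of the two primitives most of these projects accelerate:
#  - SpMM:  sparse adjacency @ dense features (neighbor aggregation)
#  - SDDMM: dense @ dense^T evaluated only at the nonzeros of a sparse mask
import torch

def spmm(adj: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
    """SpMM: aggregate neighbor features; adj is a sparse COO adjacency matrix."""
    return torch.sparse.mm(adj, feats)

def sddmm(adj: torch.Tensor, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """SDDMM: compute (a @ b^T) only where adj is nonzero, one dot product per edge."""
    adj = adj.coalesce()
    rows, cols = adj.indices()
    vals = (a[rows] * b[cols]).sum(dim=-1)
    return torch.sparse_coo_tensor(adj.indices(), vals, adj.shape)

# Toy usage: a 4-node graph with 5 edges and 8-dimensional node features.
idx = torch.tensor([[0, 0, 1, 2, 3], [1, 2, 3, 3, 0]])
adj = torch.sparse_coo_tensor(idx, torch.ones(5), (4, 4))
x = torch.randn(4, 8)
out = spmm(adj, x)          # (4, 8) aggregated features
scores = sddmm(adj, x, x)   # sparse (4, 4) per-edge scores, e.g. for attention
```

The libraries listed above fuse, quantize, or reorder these kernels for Tensor Cores, caching, and multi-GPU pipelines; the sketch only shows the unoptimized reference behavior.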