guessmewho233 / CoGNN_info_for_SC22
☆8 · Updated 2 years ago
Alternatives and similar repositories for CoGNN_info_for_SC22:
Users interested in CoGNN_info_for_SC22 are comparing it to the repositories listed below.
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… (☆41, updated 11 months ago)
- SoCC'20 and TPDS'21: Scaling GNN Training on Large Graphs via Computation-aware Caching and Partitioning. (☆50, updated last year)
- ☆11, updated 2 years ago
- Artifact for OSDI'21 GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs. (☆65, updated 2 years ago)
- ☆23, updated last year
- ☆27, updated 7 months ago
- Distributed Multi-GPU GNN Framework (☆37, updated 4 years ago)
- ☆31, updated 9 months ago
- Artifact for PPoPP22 QGTC: Accelerating Quantized GNN via GPU Tensor Core. (☆27, updated 3 years ago)
- Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching (☆37, updated 8 months ago)
- ☆36, updated last year
- SC'22 Artifacts Evaluation (☆9, updated 2 years ago)
- ☆37, updated 3 years ago
- A Factored System for Sample-based GNN Training over GPUs (☆42, updated last year)
- GPU-initiated Large-scale GNN System [ATC 23] (☆18, updated 4 months ago)
- Graph Sampling using GPU (☆51, updated 2 years ago)
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training (☆23, updated last year)
- Artifacts for our ASPLOS'23 paper ElasticFlow (☆52, updated 10 months ago)
- ☆9, updated 3 years ago
- FGNN's artifact evaluation (EuroSys 2022) (☆17, updated 2 years ago)
- Compiler for Dynamic Neural Networks (☆45, updated last year)
- Artifact evaluation of the paper "Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining" (☆24, updated 3 years ago)
- ☆23, updated 2 years ago
- ☆31, updated last year
- ☆49, updated 2 years ago
- ☆14, updated 4 years ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) (☆82, updated last year)
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling (☆10, updated last year)
- ☆46, updated 2 months ago