SJTU-IPADS / fgnn-artifacts
FGNN's artifact evaluation (EuroSys 2022)
☆17 · Updated 3 years ago
Alternatives and similar repositories for fgnn-artifacts
Users interested in fgnn-artifacts are comparing it to the repositories listed below.
- A Factored System for Sample-based GNN Training over GPUs ☆42 · Updated last year
- Dorylus: Affordable, Scalable, and Accurate GNN Training ☆76 · Updated 4 years ago
- Artifact for OSDI'21 GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs. ☆66 · Updated 2 years ago
- ☆27 · Updated 10 months ago
- Distributed Multi-GPU GNN Framework ☆36 · Updated 4 years ago
- ☆22 · Updated last year
- SoCC'20 and TPDS'21: Scaling GNN Training on Large Graphs via Computation-aware Caching and Partitioning. ☆50 · Updated 2 years ago
- Graph Sampling using GPU ☆52 · Updated 3 years ago
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… ☆39 · Updated last year
- Graphiler is a compiler stack built on top of DGL and TorchScript which compiles GNNs defined using user-defined functions (UDFs) into ef… ☆60 · Updated 2 years ago
- ☆45 · Updated 2 years ago
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training ☆24 · Updated last year
- Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching ☆38 · Updated 11 months ago
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆52 · Updated last year
- A Framework for Graph Sampling and Random Walk on GPUs. ☆39 · Updated 4 months ago
- GPU-initiated Large-scale GNN System [ATC 23] ☆18 · Updated 7 months ago
- ☆33 · Updated last week
- Artifact evaluation of the paper "Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining" ☆23 · Updated 3 years ago
- Compiler for Dynamic Neural Networks ☆46 · Updated last year
- FlashMob is a shared-memory random walk system. ☆32 · Updated last year
- [ICLR 2022] "PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication" by Cheng Wan, Y… ☆33 · Updated 2 years ago
- High-performance RDMA-based distributed feature collection component for training GNN models on extremely large graphs ☆54 · Updated 2 years ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆50 · Updated 2 years ago
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆35 · Updated 2 years ago
- ☆14 · Updated 4 years ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆52 · Updated 10 months ago
- ☆19 · Updated 4 years ago
- ☆12 · Updated last year
- ☆8 · Updated 3 years ago
- ☆30 · Updated last year