SJTU-IPADS / fgnn-artifacts
FGNN's artifact evaluation (EuroSys 2022)
☆17 · Updated 3 years ago
Alternatives and similar repositories for fgnn-artifacts:
Users interested in fgnn-artifacts are comparing it to the libraries listed below.
- A Factored System for Sample-based GNN Training over GPUs ☆42 · Updated last year
- SoCC'20 and TPDS'21: Scaling GNN Training on Large Graphs via Computation-aware Caching and Partitioning. ☆50 · Updated last year
- ☆27 · Updated 8 months ago
- Graph Sampling using GPU ☆52 · Updated 3 years ago
- ☆22 · Updated last year
- Graphiler is a compiler stack built on top of DGL and TorchScript which compiles GNNs defined using user-defined functions (UDFs) into ef… ☆61 · Updated 2 years ago
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… ☆40 · Updated last year
- Dorylus: Affordable, Scalable, and Accurate GNN Training ☆77 · Updated 3 years ago
- Artifact for OSDI'21 GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs. ☆66 · Updated 2 years ago
- Distributed Multi-GPU GNN Framework ☆37 · Updated 4 years ago
- ☆32 · Updated 10 months ago
- Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching ☆39 · Updated 9 months ago
- ☆46 · Updated 2 years ago
- Compiler for Dynamic Neural Networks ☆46 · Updated last year
- A Framework for Graph Sampling and Random Walk on GPUs. ☆39 · Updated 3 months ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆52 · Updated 8 months ago
- FlashMob is a shared-memory random walk system. ☆32 · Updated last year
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training ☆23 · Updated last year
- ☆14 · Updated 4 years ago
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆32 · Updated 2 years ago
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆51 · Updated 11 months ago
- ☆30 · Updated last year
- High-performance RDMA-based distributed feature collection component for training GNN models on EXTREMELY large graphs ☆52 · Updated 2 years ago
- ☆18 · Updated 4 years ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆49 · Updated 2 years ago
- Vector search with bounded performance. ☆34 · Updated last year
- My paper/code reading notes in Chinese ☆46 · Updated 11 months ago
- ☆36 · Updated last year
- ☆8 · Updated 2 years ago
- ☆33 · Updated 10 months ago