uclasystem / dorylus
Dorylus: Affordable, Scalable, and Accurate GNN Training
☆78 · Updated 3 years ago
Alternatives and similar repositories for dorylus:
Users interested in dorylus are comparing it to the libraries listed below.
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆49 · Updated 2 years ago
- A Factored System for Sample-based GNN Training over GPUs ☆42 · Updated last year
- ☆23 · Updated last year
- Distributed Multi-GPU GNN Framework ☆37 · Updated 4 years ago
- FGNN's artifact evaluation (EuroSys 2022) ☆17 · Updated 2 years ago
- ☆31 · Updated 9 months ago
- ☆53 · Updated 4 years ago
- Artifact for OSDI'21 GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs. ☆65 · Updated 2 years ago
- Graph Sampling using GPU ☆51 · Updated 3 years ago
- Artifact evaluation of the paper "Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining" ☆24 · Updated 3 years ago
- ☆14 · Updated 4 years ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆82 · Updated last year
- ☆27 · Updated 7 months ago
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆52 · Updated 10 months ago
- ☆18 · Updated 4 years ago
- Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching ☆37 · Updated 8 months ago
- SoCC'20 and TPDS'21: Scaling GNN Training on Large Graphs via Computation-aware Caching and Partitioning. ☆50 · Updated last year
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆42 · Updated 2 years ago
- FlashMob is a shared-memory random walk system. ☆32 · Updated last year
- Compiler for Dynamic Neural Networks ☆45 · Updated last year
- ☆43 · Updated 3 years ago
- ☆31 · Updated last year
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… ☆41 · Updated last year
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆31 · Updated 2 years ago
- ☆8 · Updated 2 years ago
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training ☆23 · Updated last year
- ☆49 · Updated 2 years ago
- ☆32 · Updated 9 months ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated 2 years ago
- ☆37 · Updated 3 years ago