nvidia-china-sae / WholeGraph
☆11 · Updated 4 years ago
Alternatives and similar repositories for WholeGraph
Users interested in WholeGraph are comparing it to the libraries listed below.
- Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture (accepted by PVLDB) ☆44 · Updated 2 years ago
- Graphiler is a compiler stack built on top of DGL and TorchScript which compiles GNNs defined using user-defined functions (UDFs) into ef… ☆59 · Updated 3 years ago
- ☆70 · Updated 4 years ago
- ☆12 · Updated 3 years ago
- ☆47 · Updated 3 years ago
- [MLSys 2022] "BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node … ☆56 · Updated 2 years ago
- WholeGraph - large scale Graph Neural Networks ☆106 · Updated last year
- The official SALIENT system described in the paper "Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and P… ☆40 · Updated 2 years ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆122 · Updated last year
- Largest real-world open-source graph dataset - Work done under IBM-Illinois Discovery Accelerator Institute and Amazon Research Awards a… ☆85 · Updated 5 months ago
- PyTorch Library for Low-Latency, High-Throughput Graph Learning on GPUs. ☆302 · Updated 2 years ago
- ☆112 · Updated 4 years ago
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… ☆41 · Updated last year
- A GPU-accelerated graph learning library for PyTorch, facilitating the scaling of GNN training and inference. ☆146 · Updated 2 months ago
- Graph Sampling using GPU ☆52 · Updated 3 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆68 · Updated 8 months ago
- Artifact for PPoPP20 "Understanding and Bridging the Gaps in Current GNN Performance Optimizations" ☆40 · Updated 4 years ago
- [IJCAI2023] An automated parallel training system that combines the advantages from both data and model parallelism. If you have any inte… ☆52 · Updated 2 years ago
- ☆77 · Updated 4 years ago
- [MLSys'22] Understanding GNN computational graph: A coordinated computation, IO, and memory perspective ☆22 · Updated 2 years ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆328 · Updated 4 months ago
- Set of datasets for the deep learning recommendation model (DLRM). ☆48 · Updated 2 years ago
- SoCC'20 and TPDS'21: Scaling GNN Training on Large Graphs via Computation-aware Caching and Partitioning. ☆51 · Updated 2 years ago
- Artifact evaluation of the paper "Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining" ☆23 · Updated 3 years ago
- ☆49 · Updated 8 months ago
- Distributed Multi-GPU GNN Framework ☆36 · Updated 5 years ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆230 · Updated 2 years ago
- [ICLR 2022] "PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication" by Cheng Wan, Y… ☆33 · Updated 2 years ago
- ICLR 2021 ☆48 · Updated 4 years ago
- Artifact for OSDI'21 GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs. ☆69 · Updated 2 years ago