rapidsai / wholegraph
WholeGraph - large-scale Graph Neural Networks
☆106 · Updated last year
Alternatives and similar repositories for wholegraph
Users interested in wholegraph are comparing it to the libraries listed below:
- ☆70 · Updated 4 years ago
- PyTorch Library for Low-Latency, High-Throughput Graph Learning on GPUs. ☆301 · Updated 2 years ago
- Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture (accepted by PVLDB) ☆44 · Updated 2 years ago
- Set of datasets for the deep learning recommendation model (DLRM). ☆48 · Updated 3 years ago
- Large scale graph learning on a single machine. ☆167 · Updated 11 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆125 · Updated last year
- The official SALIENT system described in the paper "Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and P… ☆40 · Updated 2 years ago
- ☆112 · Updated 4 years ago
- distributed-embeddings is a library for building large embedding-based models in Tensorflow 2. ☆46 · Updated 2 years ago
- Artifact evaluation of the paper "Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining" ☆23 · Updated 3 years ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 · Updated 7 months ago
- Samples demonstrating how to use the Compute Sanitizer Tools and Public API ☆93 · Updated 2 years ago
- Graphiler is a compiler stack built on top of DGL and TorchScript which compiles GNNs defined using user-defined functions (UDFs) into ef… ☆59 · Updated 3 years ago
- oneCCL Bindings for Pytorch* (deprecated) ☆104 · Updated last month
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… ☆41 · Updated last year
- PArametrized Recommendation and Ai Model benchmark is a repository for development of numerous uBenchmarks as well as end to end nets for… ☆155 · Updated last week
- A GPU-accelerated graph learning library for PyTorch, facilitating the scaling of GNN training and inference. ☆147 · Updated 4 months ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆164 · Updated 2 weeks ago
- Home for OctoML PyTorch Profiler ☆113 · Updated 2 years ago
- A schedule language for large model training ☆152 · Updated 5 months ago
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆459 · Updated 3 weeks ago
- ☆26 · Updated 11 months ago
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆164 · Updated this week
- [MLSys 2022] "BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node … ☆56 · Updated 2 years ago
- HierarchicalKV is part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆189 · Updated 2 months ago
- Distributed Multi-GPU GNN Framework ☆36 · Updated 5 years ago
- Microsoft Collective Communication Library ☆66 · Updated last year
- An experimental parallel training platform ☆56 · Updated last year
- 🔮 Execution time predictions for deep neural network training iterations across different GPUs. ☆63 · Updated 3 years ago
- Dorylus: Affordable, Scalable, and Accurate GNN Training ☆76 · Updated 4 years ago