K-Wu / pytorch-direct
Code for Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture (accepted by PVLDB). The outdated write-up (https://arxiv.org/abs/2101.07956) explains the engineering details, but only a portion of the functionality has been migrated to this newer PyTorch version, 1.8.0nightly (e152ca5).
☆9 · Updated last year
Alternatives and similar repositories for pytorch-direct:
Users that are interested in pytorch-direct are comparing it to the libraries listed below
- An Attention Superoptimizer ☆21 · Updated 2 months ago
- A memory profiler for NVIDIA GPUs to explore memory inefficiencies in GPU-accelerated applications. ☆25 · Updated 5 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆74 · Updated this week
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS '24). ☆19 · Updated last year
- Cavs: An Efficient Runtime System for Dynamic Neural Networks ☆14 · Updated 4 years ago
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆23 · Updated last month
- PyTorch-Direct code on top of PyTorch-1.8.0nightly (e152ca5) for Large Graph Convolutional Network Training with GPU-Oriented Data Commun… ☆44 · Updated last year
- FractalTensor is a programming framework that introduces a novel approach to organizing data in deep neural networks (DNNs) as a list of … ☆24 · Updated 3 months ago
- An external memory allocator example for PyTorch. ☆14 · Updated 3 years ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- ☆21 · Updated last month
- ☆39 · Updated 5 years ago
- An IR for efficiently simulating distributed ML computation. ☆28 · Updated last year
- ☆19 · Updated 6 months ago
- ☆23 · Updated 4 months ago
- A source-to-source compiler for optimizing CUDA dynamic parallelism by aggregating launches ☆15 · Updated 5 years ago
- Inference framework for MoE layers based on TensorRT with Python binding ☆41 · Updated 3 years ago
- An extension of TVMScript for writing simple, high-performance GPU kernels with tensor cores. ☆51 · Updated 8 months ago
- ☆26 · Updated 2 weeks ago
- GPTQ inference TVM kernel ☆38 · Updated 11 months ago
- ☆11 · Updated 3 years ago
- ☆30 · Updated 2 years ago
- Benchmark PyTorch Custom Operators ☆14 · Updated last year
- ☆26 · Updated this week
- CUDA 12.2 HMM demos ☆19 · Updated 8 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆63 · Updated this week
- ☆22 · Updated 2 years ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆62 · Updated last year
- GPU Performance Advisor ☆64 · Updated 2 years ago
- Supplemental materials for The ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆23 · Updated 3 months ago