K-Wu / pytorch-direct
Code for "Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture" (accepted by PVLDB). The write-up (https://arxiv.org/abs/2101.07956) explains the engineering details, but only a portion of the functionality has been migrated to this newer PyTorch version, 1.8.0nightly (e152ca5).
☆9 · Updated last year
Alternatives and similar repositories for pytorch-direct:
Users interested in pytorch-direct are comparing it to the libraries listed below.
- TileFusion is a highly efficient kernel template library designed to elevate the level of abstraction in CUDA C for processing tiles. ☆55 · Updated this week
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆19 · Updated last week
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS '24). ☆19 · Updated 11 months ago
- ☆19 · Updated 4 months ago
- An external memory allocator example for PyTorch. ☆14 · Updated 3 years ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆62 · Updated 11 months ago
- An Attention Superoptimizer ☆21 · Updated last month
- ☆23 · Updated 2 months ago
- A memory profiler for NVIDIA GPUs to explore memory inefficiencies in GPU-accelerated applications. ☆25 · Updated 4 months ago
- GPTQ inference TVM kernel ☆38 · Updated 9 months ago
- Cavs: An Efficient Runtime System for Dynamic Neural Networks ☆14 · Updated 4 years ago
- ☆23 · Updated last month
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆64 · Updated 2 years ago
- PyTorch-Direct code on top of PyTorch-1.8.0nightly (e152ca5) for Large Graph Convolutional Network Training with GPU-Oriented Data Commun… ☆45 · Updated last year
- Artifacts for the SOSP '19 paper "Optimizing Deep Learning Computation with Automatic Generation of Graph Substitutions" ☆21 · Updated 2 years ago
- An extension of TVMScript to write simple and high-performance GPU kernels with tensor cores. ☆51 · Updated 6 months ago
- A source-to-source compiler for optimizing CUDA dynamic parallelism by aggregating launches ☆15 · Updated 5 years ago
- PSTensor provides a way to hack the memory management of tensors in TensorFlow and PyTorch by defining your own C++ Tensor Class. ☆10 · Updated 3 years ago
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆17 · Updated this week
- ☆38 · Updated 4 years ago
- Thunder Research Group's Collective Communication Library ☆33 · Updated 9 months ago
- ☆8 · Updated last year
- Benchmark PyTorch Custom Operators ☆13 · Updated last year
- ☆11 · Updated 3 years ago
- Artifacts of EVT (ASPLOS '24) ☆23 · Updated 11 months ago
- Benchmark scripts for TVM ☆73 · Updated 2 years ago
- CUDA 12.2 HMM demos ☆19 · Updated 6 months ago
- GEMM and Winograd-based convolutions using CUTLASS ☆26 · Updated 4 years ago
- An IR for efficiently simulating distributed ML computation. ☆27 · Updated last year
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo ☆19 · Updated last year