K-Wu / pytorch-direct
Code for "Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture" (accepted by PVLDB). The write-up (https://arxiv.org/abs/2101.07956) is outdated but explains the engineering details; only a portion of the functionality has been migrated to this newer PyTorch version, 1.8.0 nightly (e152ca5).
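The core idea behind the "GPU-oriented data communication" in the paper is letting GPU threads gather scattered node features directly from host memory (zero-copy over mapped pinned memory) instead of staging a bulk `cudaMemcpy`. A minimal illustrative sketch in plain CUDA follows; the kernel, variable names, and sizes are assumptions for illustration, not code from this repository.

```cuda
#include <cuda_runtime.h>

// Illustrative kernel: each GPU thread reads one feature element
// directly from host-pinned (mapped) memory, the irregular gather
// pattern typical of GNN mini-batch feature loading.
__global__ void gather_features(const float* __restrict__ host_feats,
                                const int* __restrict__ indices,
                                float* __restrict__ out,
                                int batch_size, int feat_dim) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= batch_size * feat_dim) return;
    int node = indices[i / feat_dim];           // which graph node
    out[i] = host_feats[node * feat_dim + i % feat_dim];
}

int main() {
    const int N = 1024, D = 128, B = 64;  // nodes, feature dim, batch size

    // Mapped (zero-copy) pinned allocation: the feature table stays in
    // host memory and the GPU reads it over the interconnect on demand,
    // avoiding a bulk copy of the whole table to device memory.
    float* h_feats;
    cudaHostAlloc((void**)&h_feats, (size_t)N * D * sizeof(float),
                  cudaHostAllocMapped);
    float* d_feats;  // device-side alias of the host allocation
    cudaHostGetDevicePointer((void**)&d_feats, h_feats, 0);

    int* d_idx;
    float* d_out;
    cudaMalloc(&d_idx, B * sizeof(int));
    cudaMalloc(&d_out, (size_t)B * D * sizeof(float));
    cudaMemset(d_idx, 0, B * sizeof(int));  // demo: all-zero node IDs

    int threads = 256;
    int blocks = (B * D + threads - 1) / threads;
    gather_features<<<blocks, threads>>>(d_feats, d_idx, d_out, B, D);
    cudaDeviceSynchronize();

    cudaFree(d_out);
    cudaFree(d_idx);
    cudaFreeHost(h_feats);
    return 0;
}
```

Only the accessed elements cross the PCIe/NVLink bus, which is why this approach wins when the mini-batch touches a small, scattered subset of a large feature table.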
☆8 · Updated last year
Related projects:
- An Attention Superoptimizer ☆19 · Updated 4 months ago
- CUDA 12.2 HMM demos ☆16 · Updated last month
- An external memory allocator example for PyTorch. ☆13 · Updated 2 years ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆61 · Updated 2 years ago
- ☆14 · Updated last week
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆17 · Updated 2 years ago
- ☆11 · Updated 3 years ago
- Inference framework for MoE layers based on TensorRT with Python binding ☆41 · Updated 3 years ago
- ☆16 · Updated this week
- GPTQ inference TVM kernel ☆35 · Updated 4 months ago
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS '24). ☆19 · Updated 6 months ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆58 · Updated 6 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆33 · Updated last year
- Artifacts for the SOSP '19 paper "Optimizing Deep Learning Computation with Automatic Generation of Graph Substitutions" ☆21 · Updated 2 years ago
- PyTorch-Direct code on top of PyTorch-1.8.0nightly (e152ca5) for Large Graph Convolutional Network Training with GPU-Oriented Data Commun… ☆45 · Updated last year
- ☆7 · Updated last year
- Cavs: An Efficient Runtime System for Dynamic Neural Networks ☆13 · Updated 4 years ago
- High Performance Grouped GEMM in PyTorch ☆20 · Updated 2 years ago
- ☆35 · Updated 10 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆93 · Updated last week
- Benchmark for matrix multiplications between dense and block-sparse (BSR) matrices in TVM, blocksparse (Gray et al.), and cuSPARSE. ☆24 · Updated 4 years ago
- PSTensor provides a way to hack the memory management of tensors in TensorFlow and PyTorch by defining your own C++ Tensor class. ☆9 · Updated 2 years ago
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation ☆26 · Updated 4 years ago
- ☆9 · Updated last year
- TensorRT LLM Benchmark Configuration ☆10 · Updated last month
- An IR for efficiently simulating distributed ML computation. ☆24 · Updated 8 months ago
- FP64-equivalent GEMM via Int8 Tensor Cores using the Ozaki scheme ☆44 · Updated 2 weeks ago
- ☆48 · Updated 6 months ago
- Benchmark PyTorch Custom Operators ☆13 · Updated last year
- Benchmark scripts for TVM ☆73 · Updated 2 years ago