UofT-EcoSystem / BPPSA-open
The open-source part of the code to reproduce "BPPSA: Scaling Back-propagation by Parallel Scan Algorithm".
☆12 · Updated 4 years ago
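For context, BPPSA reformulates the backward pass as a scan over an associative operator: the gradient at the input is a chain of per-layer transposed-Jacobian products, which a Blelloch-style scan can combine in O(log n) parallel steps instead of a strictly sequential loop. The NumPy sketch below is only a toy illustration of that associativity; the layer count, Jacobian shapes, and the `scan_combine` helper are illustrative assumptions, and the paper's actual algorithm uses a full Blelloch scan to produce every intermediate gradient, not just the final reduction shown here.

```python
import numpy as np

# Hypothetical transposed Jacobians of a 4-layer chain (shapes are illustrative).
rng = np.random.default_rng(0)
jacobians_T = [rng.standard_normal((3, 3)) for _ in range(4)]  # J1^T, J2^T, J3^T, J4^T
grad_out = rng.standard_normal(3)  # dL/dy at the network output

# Sequential back-propagation: apply J4^T, then J3^T, ... one layer at a time.
seq = grad_out.copy()
for JT in reversed(jacobians_T):
    seq = JT @ seq

def scan_combine(mats):
    """Tree reduction of an associative operator (matrix product), pairing
    neighbors at each level, so depth is O(log n) given enough parallelism."""
    while len(mats) > 1:
        mats = [mats[i] @ mats[i + 1] if i + 1 < len(mats) else mats[i]
                for i in range(0, len(mats), 2)]
    return mats[0]

# Because matrix multiplication is associative, the same result can be formed
# by combining the Jacobians pairwise and applying the product once.
par = scan_combine(jacobians_T) @ grad_out
assert np.allclose(seq, par)
```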
Alternatives and similar repositories for BPPSA-open
Users interested in BPPSA-open are comparing it to the libraries listed below.
- Cavs: An Efficient Runtime System for Dynamic Neural Networks ☆14 · Updated 4 years ago
- ☆20 · Updated 3 years ago
- Supplemental materials for The ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆23 · Updated 2 months ago
- ☆23 · Updated 8 months ago
- ☆44 · Updated 3 years ago
- Graphiler is a compiler stack built on top of DGL and TorchScript which compiles GNNs defined using user-defined functions (UDFs) into ef… ☆59 · Updated 2 years ago
- Code released to accompany the ISCA paper: "T4: Compiling Sequential Code for Effective Speculative Parallelization in Hardware" ☆29 · Updated 3 years ago
- An Attention Superoptimizer ☆22 · Updated 6 months ago
- ☆80 · Updated 2 years ago
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo ☆18 · Updated 2 years ago
- ☆47 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆31 · Updated 5 months ago
- ☆27 · Updated 5 years ago
- FractalTensor is a programming framework that introduces a novel approach to organizing data in deep neural networks (DNNs) as a list of … ☆28 · Updated 7 months ago
- An extension of TVMScript to write simple and high-performance GPU kernels with tensor cores ☆50 · Updated last year
- ☆13 · Updated 3 years ago
- ASPLOS'24: Optimal Kernel Orchestration for Tensor Programs with Korch ☆38 · Updated 4 months ago
- An IR for efficiently simulating distributed ML computation. ☆29 · Updated last year
- Benchmark PyTorch Custom Operators ☆14 · Updated 2 years ago
- GPU Performance Advisor ☆65 · Updated 3 years ago
- gossip: Efficient Communication Primitives for Multi-GPU Systems ☆59 · Updated 3 years ago
- Sparse kernels for GNNs based on TVM ☆17 · Updated 4 years ago
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆25 · Updated 8 months ago
- ☆43 · Updated last year
- ☆18 · Updated last month
- ☆14 · Updated last year
- Thinking is hard - automate it ☆19 · Updated 2 years ago
- Benchmark for matrix multiplications between dense and block sparse (BSR) matrix in TVM, blocksparse (Gray et al.) and cuSparse. ☆24 · Updated 4 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… ☆26 · Updated 2 years ago