zhisbug / Cavs · Links
Cavs: An Efficient Runtime System for Dynamic Neural Networks
☆14 · Updated 4 years ago
Alternatives and similar repositories for Cavs
Users interested in Cavs are comparing it to the libraries listed below.
- An Attention Superoptimizer ☆22 · Updated 5 months ago
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆27 · Updated 5 months ago
- ☆43 · Updated last year
- Supplemental materials for The ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆23 · Updated 2 months ago
- Complete GPU residency for ML. ☆31 · Updated last week
- Artifacts of EVT ASPLOS'24 ☆26 · Updated last year
- ☆23 · Updated 7 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆40 · Updated 2 years ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆121 · Updated 3 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆138 · Updated 2 years ago
- ☆25 · Updated last year
- ☆49 · Updated last month
- An extension of TVMScript for writing simple, high-performance GPU kernels with Tensor Cores. ☆50 · Updated 11 months ago
- ☆16 · Updated 2 years ago
- ASPLOS'24: Optimal Kernel Orchestration for Tensor Programs with Korch ☆37 · Updated 3 months ago
- An experimental parallel training platform ☆54 · Updated last year
- ☆79 · Updated 2 years ago
- TiledLower is a Dataflow Analysis and Codegen Framework written in Rust. ☆14 · Updated 7 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆90 · Updated 2 weeks ago
- A schedule language for large model training ☆149 · Updated last year
- DeeperGEMM: crazy optimized version ☆69 · Updated 2 months ago
- ☆9 · Updated last year
- Graphiler is a compiler stack built on top of DGL and TorchScript which compiles GNNs defined using user-defined functions (UDFs) into ef… ☆60 · Updated 2 years ago
- Official repository for "IPDPS'24 QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices". ☆20 · Updated last year
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆79 · Updated 7 months ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆52 · Updated last year
- Microsoft Collective Communication Library ☆64 · Updated 7 months ago
- GPU Performance Advisor ☆65 · Updated 2 years ago
- Mille Crepe Bench: layer-wise performance analysis for deep learning frameworks. ☆17 · Updated 5 years ago
- FTPipe and related pipeline model parallelism research. ☆41 · Updated 2 years ago