caoting-dotcom / multiBranchModel
Multi-branch model for concurrent execution
☆17 · Updated last year
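The repository's theme is running independent branches of a model concurrently. As a minimal sketch of that idea in plain Python (the repo's actual API is not shown on this page, so the branch functions and the merge step below are hypothetical stand-ins):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical branch functions standing in for independent model branches
# (e.g., parallel paths in an inception-style block).
def branch_a(x):
    return [v * 2 for v in x]  # stand-in for one branch's computation

def branch_b(x):
    return [v + 1 for v in x]  # stand-in for another branch

def run_branches(x, branches):
    """Run independent branches concurrently, then merge their outputs."""
    with ThreadPoolExecutor(max_workers=len(branches)) as pool:
        futures = [pool.submit(b, x) for b in branches]
        outputs = [f.result() for f in futures]
    # Merge step: element-wise sum of branch outputs (an illustrative choice).
    return [sum(vals) for vals in zip(*outputs)]

print(run_branches([1, 2, 3], [branch_a, branch_b]))  # → [4, 7, 10]
```

In a real multi-branch inference system the branches would be GPU kernels or subgraphs scheduled onto separate streams; the thread pool here only illustrates the structure of fanning out to independent branches and joining at a merge point.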
Alternatives and similar repositories for multiBranchModel:
Users interested in multiBranchModel are comparing it to the libraries listed below.
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. (☆52, updated 7 months ago)
- ☆77, updated last year
- The open-source project for "Mandheling: Mixed-Precision On-Device DNN Training with DSP Offloading" (MobiCom'22). (☆18, updated 2 years ago)
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24). (☆49, updated 10 months ago)
- ☆14, updated 3 years ago
- An awesome list of papers on edge-AI inference. (☆95, updated last year)
- Play GEMM with TVM. (☆90, updated last year)
- ☆37, updated 3 years ago
- An Optimizing Compiler for Recommendation Model Inference. (☆23, updated last year)
- ☆28, updated 9 months ago
- MobiSys #114. (☆21, updated last year)
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. (☆42, updated 2 years ago)
- Artifacts of EVT (ASPLOS'24). (☆23, updated last year)
- SOTA Learning-augmented Systems. (☆36, updated 2 years ago)
- Artifacts for our ASPLOS'23 paper ElasticFlow. (☆51, updated 11 months ago)
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow". (☆37, updated 4 months ago)
- DietCode Code Release. (☆62, updated 2 years ago)
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections. (☆119, updated 2 years ago)
- ☆38, updated 5 years ago
- Tutorials on extending and importing TVM with a CMake include dependency. (☆13, updated 6 months ago)
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling. (☆10, updated last year)
- A study of Ampere's sparse matmul. (☆18, updated 4 years ago)
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators. (☆107, updated 2 years ago)
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs. (☆41, updated 3 weeks ago)
- ☆26, updated last year
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. (☆85, updated 2 years ago)
- ASPLOS'24: Optimal Kernel Orchestration for Tensor Programs with Korch. (☆33, updated 3 weeks ago)
- Compiler for Dynamic Neural Networks. (☆45, updated last year)
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling. (☆58, updated 11 months ago)
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS. (☆25, updated 2 months ago)