caoting-dotcom / multiBranchModel
Multi-branch model for concurrent execution
☆16 · Updated last year
Alternatives and similar repositories for multiBranchModel:
Users interested in multiBranchModel are comparing it to the libraries listed below.
- ☆74 · Updated last year
- A list of awesome edge-AI inference-related papers.☆91 · Updated last year
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces.☆53 · Updated 4 months ago
- ☆23 · Updated 2 years ago
- The open-source project for "Mandheling: Mixed-Precision On-Device DNN Training with DSP Offloading" [MobiCom '22]☆18 · Updated 2 years ago
- ☆37 · Updated 3 years ago
- Play GEMM with TVM☆85 · Updated last year
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling☆9 · Updated 10 months ago
- MobiSys#114☆21 · Updated last year
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling☆59 · Updated 8 months ago
- Artifacts for our ASPLOS'23 paper ElasticFlow☆53 · Updated 8 months ago
- SOTA Learning-augmented Systems☆34 · Updated 2 years ago
- LLM inference analyzer for different hardware platforms☆47 · Updated last month
- ☆38 · Updated 4 years ago
- ASPLOS'24: Optimal Kernel Orchestration for Tensor Programs with Korch☆30 · Updated 5 months ago
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24)☆49 · Updated 7 months ago
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems"☆20 · Updated 4 years ago
- ☆14 · Updated 2 years ago
- ☆14 · Updated 11 months ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling.☆40 · Updated 2 years ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer☆87 · Updated 10 months ago
- ☆15 · Updated 5 years ago
- An external memory allocator example for PyTorch.☆14 · Updated 3 years ago
- Hands-on model tuning with TVM, profiled on a Mac M1, an x86 CPU, and a GTX-1080 GPU.☆41 · Updated last year
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections☆117 · Updated 2 years ago
- An Optimizing Compiler for Recommendation Model Inference☆22 · Updated 11 months ago
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS☆18 · Updated 3 years ago
- A study of Ampere's Sparse Matmul☆16 · Updated 4 years ago
- ☆19 · Updated 3 months ago
- Triton compiler-related materials.☆29 · Updated 2 weeks ago