casys-kaist / CoVA
Official code repository for "CoVA: Exploiting Compressed-Domain Analysis to Accelerate Video Analytics [USENIX ATC 22]"
☆16 · Updated 7 months ago
Alternatives and similar repositories for CoVA:
Users interested in CoVA are comparing it to the libraries listed below.
- [ACM EuroSys '23] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆56 · Updated last year
- A list of awesome edge-AI inference-related papers. ☆96 · Updated last year
- ☆24 · Updated last year
- Source code and datasets for Ekya, a system for continuous learning on the edge. ☆105 · Updated 3 years ago
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys '21] - Artifact Evaluation ☆25 · Updated 3 years ago
- ☆45 · Updated 2 years ago
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 4 years ago
- [DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation ☆11 · Updated last year
- Adaptive Model Streaming for real-time video inference on edge devices ☆41 · Updated 3 years ago
- ☆21 · Updated last year
- Experimental deep learning framework written in Rust ☆14 · Updated 2 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- ☆48 · Updated 4 months ago
- (ICPP '20) ShadowTutor: Distributed Partial Distillation for Mobile Video DNN Inference ☆12 · Updated 4 years ago
- ☆21 · Updated 2 years ago
- Official Repo for "LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization" ☆29 · Updated last year
- MobiSys#114 ☆21 · Updated last year
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆52 · Updated 8 months ago
- FastFlow is a system that automatically detects CPU bottlenecks in deep learning training pipelines and resolves the bottlenecks with dat… ☆26 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆40 · Updated 5 months ago
- ☆23 · Updated 2 years ago
- LLM serving cluster simulator ☆99 · Updated last year
- Artifacts for our ASPLOS '23 paper ElasticFlow ☆51 · Updated 11 months ago
- Source code for Jellyfish, a soft real-time inference serving system ☆12 · Updated 2 years ago
- ☆56 · Updated 3 years ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆42 · Updated 2 years ago
- ☆36 · Updated 2 weeks ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆112 · Updated 2 months ago