casys-kaist / CoVA
Official code repository for "CoVA: Exploiting Compressed-Domain Analysis to Accelerate Video Analytics [USENIX ATC 22]"
☆17 · Updated last year
Alternatives and similar repositories for CoVA
Users interested in CoVA are comparing it to the libraries listed below.
- This is a list of awesome edge AI inference related papers. ☆98 · Updated last year
- Source code and datasets for Ekya, a system for continuous learning on the edge. ☆107 · Updated 3 years ago
- ☆46 · Updated 2 years ago
- ☆208 · Updated last year
- Source code for the paper: "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 4 years ago
- ☆20 · Updated 2 years ago
- Experimental deep learning framework written in Rust ☆15 · Updated 2 years ago
- MobiSys#114 ☆22 · Updated 2 years ago
- Adaptive Model Streaming for real-time video inference on edge devices ☆41 · Updated 3 years ago
- ☆100 · Updated last year
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆57 · Updated last month
- ☆11 · Updated 4 years ago
- ☆78 · Updated 2 years ago
- ☆83 · Updated last month
- LaLaRAND: Flexible Layer-by-Layer CPU/GPU Scheduling for Real-Time DNN Tasks ☆15 · Updated 3 years ago
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation ☆26 · Updated 4 years ago
- ☆25 · Updated 2 years ago
- ☆58 · Updated 3 years ago
- To deploy Transformer models in CV to mobile devices. ☆18 · Updated 3 years ago
- ☆21 · Updated last year
- ☆30 · Updated 2 years ago
- [DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation ☆11 · Updated 2 years ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆43 · Updated 3 years ago
- PacketGame: Multi-Stream Packet Gating for Concurrent Video Inference at Scale ☆12 · Updated 2 years ago
- Model-less Inference Serving ☆92 · Updated last year
- FilterForward: Scaling Video Analytics on Constrained Edge Nodes ☆28 · Updated 5 years ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆140 · Updated 2 months ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆54 · Updated last year
- A version of XRBench-MAESTRO used for the MLSys 2023 publication ☆25 · Updated 2 years ago
- ☆24 · Updated 3 years ago