casys-kaist / CoVA
Official code repository for "CoVA: Exploiting Compressed-Domain Analysis to Accelerate Video Analytics [USENIX ATC 22]"
☆18Updated last year
Alternatives and similar repositories for CoVA
Users that are interested in CoVA are comparing it to the libraries listed below
- A list of awesome edge-AI inference related papers.☆99Updated last year
- Source code and datasets for Ekya, a system for continuous learning on the edge.☆112Updated 3 years ago
- MobiSys#114☆22Updated 2 years ago
- ☆45Updated 2 years ago
- ☆209Updated last year
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access☆57Updated 3 months ago
- (ICPP '20) ShadowTutor: Distributed Partial Distillation for Mobile Video DNN Inference☆12Updated 5 years ago
- Source code for the paper: "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems"☆22Updated 4 years ago
- ☆78Updated 2 years ago
- ☆11Updated 4 years ago
- Experimental deep learning framework written in Rust☆15Updated 3 years ago
- Model-less Inference Serving☆92Updated 2 years ago
- ☆20Updated 2 years ago
- ☆101Updated last year
- To deploy Transformer models in CV to mobile devices.☆18Updated 3 years ago
- Adaptive Model Streaming for real-time video inference on edge devices☆41Updated 4 years ago
- ☆25Updated 2 years ago
- ☆15Updated 2 years ago
- FilterForward: Scaling Video Analytics on Constrained Edge Nodes☆28Updated 5 years ago
- Multi-Instance-GPU profiling tool☆60Updated 2 years ago
- ☆90Updated 3 weeks ago
- ☆22Updated last year
- PacketGame: Multi-Stream Packet Gating for Concurrent Video Inference at Scale☆14Updated 2 years ago
- ☆58Updated 3 years ago
- [DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation☆11Updated 2 years ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling☆13Updated last year
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup☆35Updated 2 years ago
- Official Repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and …☆34Updated 2 months ago
- ☆14Updated 4 years ago
- Official repository for "[IPDPS '24] QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices".☆20Updated last year