casys-kaist / CoVA

Related projects:
- Source code and datasets for Ekya, a system for continuous learning on the edge
- A curated list of papers on edge-AI inference
- Adaptive Model Streaming for real-time video inference on edge devices
- MobiSys#114
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems"
- Source code for Jellyfish, a soft real-time inference serving system
- ShadowTutor: Distributed Partial Distillation for Mobile Video DNN Inference (ICPP '20)
- PacketGame: Multi-Stream Packet Gating for Concurrent Video Inference at Scale
- FilterForward: Scaling Video Analytics on Constrained Edge Nodes
- Experimental deep learning framework written in Rust
- Pipe-BD: Pipelined Parallel Blockwise Distillation (DATE 2023)
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices (MobiSys '21) — artifact evaluation
- Deploying Transformer models for computer vision to mobile devices
- Server-driven Video Streaming for Deep Learning Inference
- Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access (ACM EuroSys '23)
- A curated list of early exiting
- Multi-Instance-GPU profiling tool
- Adaptive Video Streaming with Layered Neural Codecs
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021)
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling
- Model-less Inference Serving