casys-kaist / Edge-scheduler
☆14 · Updated 4 years ago
Alternatives and similar repositories for Edge-scheduler
Users interested in Edge-scheduler are comparing it to the repositories listed below.
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation ☆26 · Updated 4 years ago
- A list of awesome edge AI inference-related papers ☆98 · Updated last year
- ☆19 · Updated 3 years ago
- A Portable C Library for Distributed CNN Inference on IoT Edge Clusters ☆84 · Updated 5 years ago
- ☆78 · Updated 2 years ago
- A version of XRBench-MAESTRO used for the MLSys 2023 publication ☆25 · Updated 2 years ago
- Experimental deep learning framework written in Rust ☆15 · Updated 2 years ago
- (ICPP '20) ShadowTutor: Distributed Partial Distillation for Mobile Video DNN Inference ☆12 · Updated 5 years ago
- Cache design for CNNs on mobile ☆34 · Updated 7 years ago
- Model-less Inference Serving ☆92 · Updated last year
- Multi-Instance GPU profiling tool ☆60 · Updated 2 years ago
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 4 years ago
- ☆208 · Updated last year
- ☆86 · Updated last month
- SOTA Learning-augmented Systems ☆37 · Updated 3 years ago
- A DNN inference latency prediction toolkit for accurately modeling and predicting latency on diverse edge devices ☆359 · Updated last year
- ☆15 · Updated 2 years ago
- [MobiSys 2020] Fast and Scalable In-memory Deep Multitask Learning via Neural Weight Virtualization ☆15 · Updated 5 years ago
- ☆31 · Updated 2 years ago
- MobiSys#114 ☆22 · Updated 2 years ago
- An Efficient Dynamic Resource Scheduler for Deep Learning Clusters ☆42 · Updated 7 years ago
- Multi-DNN Inference Engine for Heterogeneous Mobile Processors ☆35 · Updated last year
- LaLaRAND: Flexible Layer-by-Layer CPU/GPU Scheduling for Real-Time DNN Tasks ☆15 · Updated 3 years ago
- ☆38 · Updated 3 months ago
- A tool for examining GPU scheduling behavior ☆88 · Updated last year
- Deploying Transformer models for computer vision on mobile devices ☆18 · Updated 3 years ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling ☆43 · Updated 3 years ago
- Study group on deep learning compilers ☆164 · Updated 2 years ago
- Source code and datasets for Ekya, a system for continuous learning on the edge ☆108 · Updated 3 years ago
- Exploiting Cloud Services for Cost-Effective, SLO-Aware Machine Learning Inference Serving ☆37 · Updated 5 years ago