casys-kaist / Edge-scheduler
☆14 · Updated 4 years ago
Alternatives and similar repositories for Edge-scheduler
Users interested in Edge-scheduler are comparing it to the repositories listed below.
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation ☆26 · Updated 4 years ago
- A list of awesome edge-AI inference papers ☆98 · Updated last year
- ☆78 · Updated 2 years ago
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 4 years ago
- ☆19 · Updated 3 years ago
- Experimental deep learning framework written in Rust ☆15 · Updated 3 years ago
- Model-less Inference Serving ☆91 · Updated 2 years ago
- Multi-Instance GPU (MIG) profiling tool ☆60 · Updated 2 years ago
- A portable C library for distributed CNN inference on IoT edge clusters ☆86 · Updated 5 years ago
- A version of XRBench-MAESTRO used for the MLSys 2023 publication ☆25 · Updated 2 years ago
- Multi-branch model for concurrent execution ☆18 · Updated 2 years ago
- ☆93 · Updated last month
- Cache design for CNNs on mobile ☆33 · Updated 7 years ago
- MobiSys#114 ☆22 · Updated 2 years ago
- (ICPP '20) ShadowTutor: Distributed Partial Distillation for Mobile Video DNN Inference ☆12 · Updated 5 years ago
- A DNN inference latency prediction toolkit for accurately modeling and predicting latency on diverse edge devices ☆360 · Updated last year
- SOTA Learning-augmented Systems ☆37 · Updated 3 years ago
- ☆33 · Updated 3 years ago
- ☆211 · Updated last year
- ☆25 · Updated 2 years ago
- A tool for examining GPU scheduling behavior ☆89 · Updated last year
- Code for the ACM MobiCom 2024 paper "FlexNN: Efficient and Adaptive DNN Inference on Memory-Constrained Edge Devices" ☆56 · Updated 10 months ago
- ☆38 · Updated 4 months ago
- An efficient dynamic resource scheduler for deep learning clusters ☆41 · Updated 8 years ago
- Distributed CNN inference at the edge; extends ncnn with CUDA and MPI+OpenMP support ☆21 · Updated 3 months ago
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆200 · Updated 3 years ago
- A GPU-accelerated DNN inference serving system supporting instant kernel preemption and biased concurrent execution in GPU scheduling ☆43 · Updated 3 years ago
- Multi-DNN inference engine for heterogeneous mobile processors ☆35 · Updated last year
- DISB: a DNN inference serving benchmark with diverse workloads and models, as well as real-world traces ☆56 · Updated last year
- Fine-grained GPU sharing primitives ☆147 · Updated 3 months ago