casys-kaist / Edge-scheduler
☆14 · Updated 3 years ago
Alternatives and similar repositories for Edge-scheduler:
Users interested in Edge-scheduler are comparing it to the libraries listed below.
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation ☆25 · Updated 3 years ago
- Experimental deep learning framework written in Rust ☆14 · Updated 2 years ago
- Source code for the paper: "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 4 years ago
- A list of awesome edge AI inference-related papers. ☆96 · Updated last year
- (ICPP '20) ShadowTutor: Distributed Partial Distillation for Mobile Video DNN Inference ☆12 · Updated 4 years ago
- A Portable C Library for Distributed CNN Inference on IoT Edge Clusters ☆83 · Updated 5 years ago
- ☆19 · Updated 3 years ago
- ☆77 · Updated last year
- Model-less Inference Serving ☆88 · Updated last year
- Multi-Instance-GPU profiling tool ☆57 · Updated 2 years ago
- ☆48 · Updated 4 months ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆42 · Updated 2 years ago
- LaLaRAND: Flexible Layer-by-Layer CPU/GPU Scheduling for Real-Time DNN Tasks ☆13 · Updated 3 years ago
- A version of XRBench-MAESTRO used for MLSys 2023 publication ☆23 · Updated last year
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆52 · Updated 8 months ago
- ☆49 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- ☆37 · Updated 3 years ago
- MobiSys#114 ☆21 · Updated last year
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Updated last year
- SOTA Learning-augmented Systems ☆36 · Updated 2 years ago
- BATCH: Adaptive Batching for Efficient Machine Learning Serving on Serverless Platforms ☆9 · Updated 3 years ago
- ☆24 · Updated last year
- A tool for examining GPU scheduling behavior. ☆81 · Updated 8 months ago
- Cache design for CNN on mobile ☆32 · Updated 6 years ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆112 · Updated 2 months ago
- ☆66 · Updated last month
- Multi-DNN Inference Engine for Heterogeneous Mobile Processors ☆32 · Updated 9 months ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- LLM serving cluster simulator ☆99 · Updated last year