ztt-21 / zTT
zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation
☆26 · Updated 4 years ago
Alternatives and similar repositories for zTT
Users interested in zTT are comparing it to the libraries listed below.
- This is a list of awesome edgeAI inference related papers. ☆98 · Updated last year
- Multi-DNN Inference Engine for Heterogeneous Mobile Processors ☆35 · Updated last year
- ☆87 · Updated 3 weeks ago
- ☆209 · Updated last year
- ☆14 · Updated 4 years ago
- A version of XRBench-MAESTRO used for MLSys 2023 publication ☆25 · Updated 2 years ago
- ☆78 · Updated 2 years ago
- Experimental deep learning framework written in Rust ☆15 · Updated 3 years ago
- Source code for the paper: "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 4 years ago
- MobiSys#114 ☆22 · Updated 2 years ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆146 · Updated 3 months ago
- This is the proof-of-concept CPU implementation of ASPEN used for the NeurIPS'23 paper ASPEN: Breaking Operator Barriers for Efficient Pa… ☆13 · Updated last year
- A curated list of awesome projects and papers for AI on Mobile/IoT/Edge devices. Everything is continuously updating. Welcome contributio… ☆44 · Updated 2 years ago
- LaLaRAND: Flexible Layer-by-Layer CPU/GPU Scheduling for Real-Time DNN Tasks ☆15 · Updated 3 years ago
- ☆16 · Updated 2 years ago
- ☆45 · Updated 2 years ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling ☆13 · Updated last year
- THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression ☆20 · Updated last year
- ☆25 · Updated 2 years ago
- [MobiSys 2020] Fast and Scalable In-memory Deep Multitask Learning via Neural Weight Virtualization ☆15 · Updated 5 years ago
- Model-less Inference Serving ☆92 · Updated 2 years ago
- LLM serving cluster simulator ☆116 · Updated last year
- ☆58 · Updated 3 years ago
- Multi-Instance-GPU profiling tool ☆60 · Updated 2 years ago
- ☆73 · Updated 5 months ago
- Cache design for CNN on mobile ☆34 · Updated 7 years ago
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆57 · Updated 2 months ago
- PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices ☆35 · Updated last year
- ☆53 · Updated 4 months ago
- Pie: Programmable LLM Serving ☆51 · Updated this week