zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation
☆25 · Updated 3 years ago
Alternatives and similar repositories for zTT:
Users interested in zTT are comparing it to the repositories listed below.
- Multi-DNN Inference Engine for Heterogeneous Mobile Processors ☆32 · Updated 9 months ago
- ☆14 · Updated 3 years ago
- Experimental deep learning framework written in Rust ☆14 · Updated 2 years ago
- A list of awesome edge-AI inference papers ☆96 · Updated last year
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆112 · Updated 2 months ago
- ☆36 · Updated 2 weeks ago
- ☆24 · Updated last year
- ☆77 · Updated last year
- MobiSys#114 ☆21 · Updated last year
- LLM serving cluster simulator ☆99 · Updated last year
- Proof-of-concept CPU implementation of ASPEN used for the NeurIPS'23 paper "ASPEN: Breaking Operator Barriers for Efficient Pa…" ☆11 · Updated last year
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 4 years ago
- [ACM EuroSys '23] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆56 · Updated last year
- ☆48 · Updated 4 months ago
- ☆49 · Updated 2 years ago
- LaLaRAND: Flexible Layer-by-Layer CPU/GPU Scheduling for Real-Time DNN Tasks ☆13 · Updated 3 years ago
- A version of XRBench-MAESTRO used for the MLSys 2023 publication ☆23 · Updated last year
- Cache design for CNNs on mobile ☆32 · Updated 6 years ago
- ☆201 · Updated last year
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling ☆11 · Updated last year
- ☆66 · Updated last month
- ☆14 · Updated 8 months ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2…) ☆15 · Updated last year
- A curated list of awesome projects and papers for AI on Mobile/IoT/Edge devices, continuously updated; contributions welcome ☆37 · Updated last year
- ☆37 · Updated 3 years ago
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing ☆80 · Updated 10 months ago
- (ICPP '20) ShadowTutor: Distributed Partial Distillation for Mobile Video DNN Inference ☆12 · Updated 4 years ago
- ☆16 · Updated last year
- ☆40 · Updated 4 years ago
- Official repo for "LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization" ☆29 · Updated last year