csu-eis / CoDL
☆77 · Updated last year
Alternatives and similar repositories for CoDL:
Users interested in CoDL are comparing it to the repositories listed below.
- A list of awesome papers on edge AI inference. ☆95 · Updated last year
- ☆37 · Updated 3 years ago
- The open-source project for "Mandheling: Mixed-Precision On-Device DNN Training with DSP Offloading" [MobiCom'2022] ☆18 · Updated 2 years ago
- ☆196 · Updated last year
- Multi-branch model for concurrent execution ☆17 · Updated last year
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation ☆24 · Updated 3 years ago
- ☆14 · Updated 6 months ago
- Compiler for Dynamic Neural Networks ☆45 · Updated last year
- ☆17 · Updated last year
- Multi-DNN Inference Engine for Heterogeneous Mobile Processors ☆29 · Updated 7 months ago
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 4 years ago
- MobiSys#114 ☆21 · Updated last year
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆53 · Updated 6 months ago
- ☆46 · Updated 2 months ago
- ☆49 · Updated 2 years ago
- LLM serving cluster simulator ☆93 · Updated 10 months ago
- ☆56 · Updated 3 years ago
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems ☆148 · Updated 4 months ago
- LaLaRAND: Flexible Layer-by-Layer CPU/GPU Scheduling for Real-Time DNN Tasks ☆12 · Updated 2 years ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆82 · Updated last year
- Lucid: A Non-Intrusive, Scalable and Interpretable Scheduler for Deep Learning Training Jobs ☆53 · Updated last year
- ☆45 · Updated 2 years ago
- A repository of personal notes and annotated papers from daily research. ☆112 · Updated last week
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… ☆92 · Updated 2 years ago
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆52 · Updated 10 months ago
- InFi is a library for building input filters for resource-efficient inference. ☆37 · Updated last year
- [MobiSys 2020] Fast and Scalable In-memory Deep Multitask Learning via Neural Weight Virtualization ☆16 · Updated 4 years ago
- ☆39 · Updated 4 years ago
- ☆14 · Updated last year
- PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices ☆30 · Updated last year