csu-eis / CoDL
☆78 · Updated 2 years ago
Alternatives and similar repositories for CoDL
Users interested in CoDL are comparing it to the repositories listed below.
- A list of awesome edge-AI inference related papers. ☆98 · Updated 2 years ago
- ☆212 · Updated 2 years ago
- MobiSys#114 ☆23 · Updated 2 years ago
- Multi-branch model for concurrent execution ☆18 · Updated 2 years ago
- The open-source project for "Mandheling: Mixed-Precision On-Device DNN Training with DSP Offloading" [MobiCom '22] ☆19 · Updated 3 years ago
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 5 years ago
- ☆38 · Updated 7 months ago
- LLM serving cluster simulator ☆135 · Updated last year
- Model-less Inference Serving ☆93 · Updated 2 years ago
- Multi-DNN Inference Engine for Heterogeneous Mobile Processors ☆37 · Updated last year
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆58 · Updated last year
- ☆102 · Updated 2 years ago
- ☆120 · Updated last week
- A repository of personal notes and annotated papers from daily research. ☆180 · Updated 3 weeks ago
- A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices. ☆364 · Updated last year
- A curated list of awesome projects and papers for AI on Mobile/IoT/Edge devices. Everything is continuously updating. Welcome contributio… ☆47 · Updated last week
- ☆52 · Updated 3 years ago
- Summary of some awesome work for optimizing LLM inference ☆173 · Updated 2 months ago
- Artifacts for our ASPLOS '23 paper ElasticFlow ☆55 · Updated last year
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆67 · Updated last year
- ☆19 · Updated 3 years ago
- ☆15 · Updated last year
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆199 · Updated 3 years ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) ☆93 · Updated 2 years ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆44 · Updated 3 years ago
- Open-source implementation of "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆77 · Updated 3 months ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated 3 years ago
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… ☆104 · Updated 3 years ago
- SOTA Learning-augmented Systems ☆37 · Updated 3 years ago
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys '21] - Artifact Evaluation ☆28 · Updated 4 years ago