fredrickang / LaLaRAND
LaLaRAND: Flexible Layer-by-Layer CPU/GPU Scheduling for Real-Time DNN Tasks
☆15 · Updated 3 years ago
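LaLaRAND's title refers to assigning each layer of a DNN to either the CPU or the GPU at runtime. As a rough illustration of that general idea only (not LaLaRAND's actual algorithm), the sketch below runs a toy PyTorch model with a per-layer device placement; the placement list and layer sizes are hypothetical stand-ins for what a real scheduler would derive from per-layer latency profiles and task deadlines.

```python
# Illustrative sketch of layer-by-layer CPU/GPU placement.
# NOT LaLaRAND's algorithm: the placement vector here is hypothetical.
import torch
import torch.nn as nn

layers = nn.ModuleList([
    nn.Linear(128, 256),  # layer 0
    nn.ReLU(),            # layer 1
    nn.Linear(256, 64),   # layer 2
])

# Hypothetical per-layer assignment, e.g. produced by a scheduler that
# weighs each layer's CPU vs. GPU latency against the task's deadline.
placement = ["cpu", "cpu", "cuda"] if torch.cuda.is_available() else ["cpu"] * 3

for layer, dev in zip(layers, placement):
    layer.to(dev)

def run(x: torch.Tensor) -> torch.Tensor:
    # Move the activation to each layer's assigned device before applying it.
    for layer, dev in zip(layers, placement):
        x = layer(x.to(dev))
    return x

out = run(torch.randn(1, 128))
print(out.shape)  # torch.Size([1, 64])
```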
Alternatives and similar repositories for LaLaRAND
Users interested in LaLaRAND are comparing it to the libraries listed below.
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale · ☆143 · Updated 3 months ago
- ☆38 · Updated 3 months ago
- Source code for the paper: "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" · ☆22 · Updated 4 years ago
- ☆73 · Updated 4 months ago
- Multi-DNN Inference Engine for Heterogeneous Mobile Processors · ☆35 · Updated last year
- ☆52 · Updated 9 months ago
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access · ☆57 · Updated 2 months ago
- This is a list of awesome edgeAI inference related papers. · ☆98 · Updated last year
- ☆78 · Updated 2 years ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling · ☆13 · Updated last year
- ☆24 · Updated 3 years ago
- ☆87 · Updated last week
- This repository is established to store personal notes and annotated papers during daily research. · ☆155 · Updated 2 weeks ago
- ☆16 · Updated 3 weeks ago
- ☆51 · Updated 2 years ago
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing · ☆95 · Updated last year
- LLM serving cluster simulator · ☆116 · Updated last year
- An interference-aware scheduler for fine-grained GPU sharing · ☆147 · Updated 8 months ago
- ☆194 · Updated last year
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU scheduling · ☆101 · Updated 2 years ago
- ☆209 · Updated last year
- ☆15 · Updated last year
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation · ☆26 · Updated 4 years ago
- Large Language Model (LLM) Serving Paper and Resource List · ☆24 · Updated 5 months ago
- ☆40 · Updated 2 years ago
- ☆130 · Updated last week
- ☆25 · Updated 2 years ago
- iGniter, an interference-aware GPU resource provisioning framework for achieving predictable performance of DNN inference in the cloud. · ☆38 · Updated last year
- ☆27 · Updated 10 months ago
- LLM Inference analyzer for different hardware platforms · ☆94 · Updated 3 months ago