fredrickang / LaLaRAND
LaLaRAND: Flexible Layer-by-Layer CPU/GPU Scheduling for Real-Time DNN Tasks
☆ 11 · Updated 2 years ago
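The repository's headline idea, scheduling a DNN layer by layer across CPU and GPU, can be illustrated with a toy greedy assignment loop. This is a minimal sketch under assumed per-layer latency estimates; the function name, signature, and numbers are hypothetical and not taken from the LaLaRAND codebase, which uses its own scheduling policy.

```python
# Toy sketch of layer-by-layer CPU/GPU assignment (hypothetical; not the
# LaLaRAND implementation). Each layer is placed on the device with the
# lower estimated latency, and the schedule is checked against a deadline.

def assign_layers(layer_latencies, deadline_ms):
    """layer_latencies: list of (cpu_ms, gpu_ms) estimates per layer.
    Returns (assignment, total_ms, meets_deadline)."""
    assignment, total_ms = [], 0.0
    for cpu_ms, gpu_ms in layer_latencies:
        # Greedy per-layer choice: pick the faster device for this layer.
        device, latency = ("gpu", gpu_ms) if gpu_ms <= cpu_ms else ("cpu", cpu_ms)
        assignment.append(device)
        total_ms += latency
    return assignment, total_ms, total_ms <= deadline_ms


if __name__ == "__main__":
    # Three layers with made-up CPU/GPU latency estimates (ms).
    plan, total, ok = assign_layers([(2.0, 1.0), (1.5, 3.0), (4.0, 2.5)], 6.0)
    print(plan, total, ok)  # ['gpu', 'cpu', 'gpu'] 5.0 True
```

A real scheduler would also account for CPU/GPU data-transfer cost between consecutive layers and contention from other tasks, which is what makes the problem harder than this per-layer greedy choice.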
Related projects
Alternatives and complementary repositories for LaLaRAND
- Multi-DNN Inference Engine for Heterogeneous Mobile Processors (☆ 25)
- A curated list of papers on edge-AI inference (☆ 88)
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" (☆ 19)
- A Portable C Library for Distributed CNN Inference on IoT Edge Clusters (☆ 81)
- PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices (☆ 27)
- A repository of personal research notes and annotated papers (☆ 90)
- A curated list of early-exiting papers (LLM, CV, NLP, etc.) (☆ 29)
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21], artifact evaluation (☆ 20)
- Model-less inference serving (☆ 82)
- Source code and datasets for Ekya, a system for continuous learning on the edge (☆ 102)
- Multi-branch model for concurrent execution (☆ 16)
- LLM serving cluster simulator (☆ 81)
- MobiSys #114 (☆ 21)
- Autodidactic Neurosurgeon: Collaborative Deep Inference for Mobile Edge Intelligence via Online Learning (☆ 37)
- A curated list of awesome projects and papers for AI on mobile/IoT/edge devices, continuously updated; contributions welcome (☆ 26)
- The open-source project for "Mandheling: Mixed-Precision On-Device DNN Training with DSP Offloading" [MobiCom'22] (☆ 18)
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) (☆ 79)
- NeuPIMs Simulator (☆ 54)
- Fast and Scalable In-memory Deep Multitask Learning via Neural Weight Virtualization [MobiSys 2020] (☆ 15)
- A list of papers on Vision Transformer quantization and hardware acceleration from recent AI conferences and journals (☆ 54)