jeho-lee / Awesome-On-Device-AI-Systems
☆52 · Updated 2 weeks ago
Alternatives and similar repositories for Awesome-On-Device-AI-Systems
Users interested in Awesome-On-Device-AI-Systems are comparing it to the repositories listed below.
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation · ☆25 · Updated 4 years ago
- ☆202 · Updated last year
- Multi-DNN Inference Engine for Heterogeneous Mobile Processors · ☆33 · Updated 11 months ago
- ☆21 · Updated last year
- A list of awesome edge-AI inference-related papers · ☆95 · Updated last year
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale · ☆119 · Updated last week
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs · ☆108 · Updated 2 months ago
- ☆99 · Updated last year
- Code for the ACM MobiCom 2024 paper "FlexNN: Efficient and Adaptive DNN Inference on Memory-Constrained Edge Devices" · ☆54 · Updated 5 months ago
- List of papers on Vision Transformer quantization and hardware acceleration from recent AI conferences and journals · ☆91 · Updated last year
- Curated collection of papers on MoE model inference · ☆197 · Updated 4 months ago
- ☆109 · Updated 8 months ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) · ☆16 · Updated 11 months ago
- ☆154 · Updated 11 months ago
- ☆13 · Updated 2 years ago
- Open-source implementation of "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" · ☆49 · Updated 7 months ago
- LLM serving cluster simulator · ☆106 · Updated last year
- All homeworks for TinyML and Efficient Deep Learning Computing (6.5940, Fall 2023, https://efficientml.ai) · ☆174 · Updated last year
- [DAC 2024] EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive La… · ☆57 · Updated 11 months ago
- Summary of notable work on optimizing LLM inference · ☆77 · Updated 3 weeks ago
- Experimental deep learning framework written in Rust · ☆15 · Updated 2 years ago
- Theoretical LLM performance analysis tools supporting parameter, FLOPs, memory, and latency analysis · ☆96 · Updated last week
- Repository for personal notes and annotated papers from daily research · ☆128 · Updated this week
- Survey paper list on efficient LLMs and foundation models · ☆248 · Updated 9 months ago
- Open-source project for "Mandheling: Mixed-Precision On-Device DNN Training with DSP Offloading" [MobiCom'22] · ☆19 · Updated 2 years ago
- LaLaRAND: Flexible Layer-by-Layer CPU/GPU Scheduling for Real-Time DNN Tasks · ☆15 · Updated 3 years ago
- Personal paper-reading notes (covering cloud computing, resource management, systems, machine learning, deep learning, and o… · ☆107 · Updated 2 weeks ago
- LLM inference analyzer for different hardware platforms · ☆74 · Updated 3 weeks ago
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25) · ☆37 · Updated 2 months ago
- ☆59 · Updated last year