UbiquitousLearning / Mandheling-DSP-Training
The open-source project for "Mandheling: Mixed-Precision On-Device DNN Training with DSP Offloading" [MobiCom 2022]
☆19 · Updated 2 years ago
Alternatives and similar repositories for Mandheling-DSP-Training
Users interested in Mandheling-DSP-Training are comparing it to the repositories listed below.
- Artifacts of EVT ASPLOS'24 · ☆25 · Updated last year
- MobiSys#114 · ☆21 · Updated last year
- Compiler for Dynamic Neural Networks · ☆46 · Updated last year
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) · ☆51 · Updated last year
- ☆77 · Updated 2 years ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. · ☆52 · Updated 9 months ago
- ☆79 · Updated 2 years ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs · ☆47 · Updated 2 months ago
- DietCode Code Release · ☆64 · Updated 2 years ago
- LLM Inference analyzer for different hardware platforms · ☆69 · Updated last week
- Artifacts for our ASPLOS'23 paper ElasticFlow · ☆51 · Updated last year
- ☆19 · Updated 3 years ago
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" · ☆46 · Updated 6 months ago
- ☆38 · Updated 10 months ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling · ☆12 · Updated last year
- LLM serving cluster simulator · ☆102 · Updated last year
- Multi-branch model for concurrent execution · ☆17 · Updated last year
- A curated list of papers on edge-AI inference. · ☆96 · Updated last year
- ☆99 · Updated last year
- ASPLOS'24: Optimal Kernel Orchestration for Tensor Programs with Korch · ☆35 · Updated 2 months ago
- PerFlow-AI is a programmable performance analysis, modeling, and prediction tool for AI systems. · ☆19 · Updated last month
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. · ☆25 · Updated last year
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NIPS'24) · ☆40 · Updated 5 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling · ☆58 · Updated last year
- ☆31 · Updated 2 years ago
- An experimental parallel training platform · ☆54 · Updated last year
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. · ☆42 · Updated 3 years ago
- ☆21 · Updated last year
- ☆27 · Updated last year
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators · ☆110 · Updated 2 years ago