geardvfs / GearDVFS
☆9 · Updated last year
Alternatives and similar repositories for GearDVFS:
Users interested in GearDVFS are comparing it to the repositories listed below.
- ☆12 · Updated 5 years ago
- ☆17 · Updated last year
- A list of awesome edge AI inference-related papers. ☆91 · Updated last year
- PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices ☆29 · Updated 11 months ago
- INFOCOM 2024: Online Resource Allocation for Edge Intelligence with Colocated Model Retraining and Inference ☆9 · Updated 3 months ago
- PyTorch implementation of the paper: Decomposing Vision Transformers for Collaborative Inference in Edge Devices ☆10 · Updated 6 months ago
- Implementation of the paper: RTCoInfer: Real-time Edge-Cloud Collaborative CNN Inference for Stream Analytics on Ubiquitous Images ☆14 · Updated 2 years ago
- Cloud-edge collaborative inference 📚 Dynamic adaptive DNN surgery for inference acceleration on the edge ☆31 · Updated last year
- ☆14 · Updated last year
- ☆193 · Updated last year
- Autodidactic Neurosurgeon: Collaborative Deep Inference for Mobile Edge Intelligence via Online Learning ☆39 · Updated 3 years ago
- A DNN model partition demo ☆30 · Updated 5 years ago
- iGniter, an interference-aware GPU resource provisioning framework for achieving predictable performance of DNN inference in the cloud. ☆37 · Updated 7 months ago
- ☆74 · Updated last year
- Code for the paper "Joint Architecture Design and Workload Partitioning for DNN Inference on Industrial IoT Clusters" ☆12 · Updated last year
- PyTorch implementation of the paper: Multi-Agent Collaborative Inference via DNN Decoupling: Intermediate Feature Compression and Edge Le… ☆35 · Updated last year
- DNN_Partition helper tool for basic performance profiling of PyTorch models, with support for model partitioning ☆11 · Updated 3 years ago
- LaLaRAND: Flexible Layer-by-Layer CPU/GPU Scheduling for Real-Time DNN Tasks ☆12 · Updated 2 years ago
- Auto-Split: A General Framework of Collaborative Edge-Cloud AI ☆12 · Updated 3 years ago
- BATCH: Adaptive Batching for Efficient Machine Learning Serving on Serverless Platforms ☆9 · Updated 3 years ago
- ☆56 · Updated 3 years ago
- Deep Compressive Offloading: Speeding Up Neural Network Inference by Trading Edge Computation for Network Latency ☆26 · Updated 4 years ago
- Cloud-edge collaborative inference 📚 Collaborative Inference Work Summary ☆54 · Updated 3 weeks ago
- InFi is a library for building input filters for resource-efficient inference. ☆37 · Updated last year
- Source code for Jellyfish, a soft real-time inference serving system ☆12 · Updated 2 years ago
- Cloud-edge collaborative inference 📚 Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge ☆68 · Updated last year
- ☆21 · Updated last year
- [TMC'22] SplitPlace: AI Augmented Splitting and Placement of Large-Scale Neural Networks in Mobile Edge Environments ☆16 · Updated 2 years ago
- A curated list of research on systems for edge intelligence and computing (Edge MLSys), including frameworks, tools, repositories, etc. Paper… ☆26 · Updated 3 years ago
- The open-source project for "Mandheling: Mixed-Precision On-Device DNN Training with DSP Offloading" [MobiCom'2022] ☆18 · Updated 2 years ago