wenh18 / AdaptiveNet_artifact
☆14 · Updated last year
Alternatives and similar repositories for AdaptiveNet_artifact:
Users interested in AdaptiveNet_artifact are comparing it to the libraries listed below.
- This is a list of awesome edge-AI inference-related papers. ☆95 · Updated last year
- MobiSys#114 ☆21 · Updated last year
- Source code and datasets for Ekya, a system for continuous learning on the edge. ☆105 · Updated 3 years ago
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys '21] - Artifact Evaluation ☆24 · Updated 3 years ago
- PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices ☆30 · Updated last year
- Deploying Transformer models for computer vision to mobile devices. ☆17 · Updated 3 years ago
- A curated list of awesome projects and papers for AI on Mobile/IoT/Edge devices, continuously updated. Contributions welcome. ☆33 · Updated last year
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- Source code for Jellyfish, a soft real-time inference serving system ☆12 · Updated 2 years ago
- [ACM MobiCom '23] Cost-effective On-device Continual Learning over Memory Hierarchy with Miro ☆14 · Updated last year
- PyTorch-based early-exit network inspired by BranchyNet ☆31 · Updated 2 weeks ago
- Experimental deep learning framework written in Rust ☆14 · Updated 2 years ago
- A curated list of early-exiting methods (LLM, CV, NLP, etc.) ☆44 · Updated 7 months ago
- Adaptive Model Streaming for real-time video inference on edge devices ☆41 · Updated 3 years ago
- LegoDNN: a block-grained scaling tool for mobile vision systems ☆51 · Updated last year
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 4 years ago
- InFi is a library for building input filters for resource-efficient inference. ☆38 · Updated last year
- Deep Compressive Offloading: Speeding Up Neural Network Inference by Trading Edge Computation for Network Latency ☆26 · Updated 4 years ago
- PyTorch implementation of the paper "Decomposing Vision Transformers for Collaborative Inference in Edge Devices" ☆12 · Updated 8 months ago
- Multi-DNN Inference Engine for Heterogeneous Mobile Processors ☆30 · Updated 8 months ago
- Our unique contributions are in tools/train/benchmark. ☆19 · Updated 2 years ago