wenh18 / AdaptiveNet
☆16 · Updated last year
Alternatives and similar repositories for AdaptiveNet
Users interested in AdaptiveNet are comparing it to the libraries listed below.
- This is a list of awesome edgeAI inference related papers. ☆98 · Updated last year
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation ☆26 · Updated 4 years ago
- A curated list of awesome projects and papers for AI on Mobile/IoT/Edge devices. Everything is continuously updating. Welcome contributio… ☆42 · Updated 2 years ago
- Source code for the paper: "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 4 years ago
- One-size-fits-all model for mobile AI, a novel paradigm for mobile AI in which the OS and hardware co-manage a foundation model that is c… ☆29 · Updated last year
- MobiSys#114 ☆22 · Updated 2 years ago
- Autodidactic Neurosurgeon: Collaborative Deep Inference for Mobile Edge Intelligence via Online Learning ☆41 · Updated 4 years ago
- Source code and datasets for Ekya, a system for continuous learning on the edge. ☆107 · Updated 3 years ago
- PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices ☆36 · Updated last year
- Multi-DNN Inference Engine for Heterogeneous Mobile Processors ☆35 · Updated last year
- A demo of an end-to-end federated learning system. ☆69 · Updated 3 years ago
- A Portable C Library for Distributed CNN Inference on IoT Edge Clusters ☆83 · Updated 5 years ago
- [MobiSys 2020] Fast and Scalable In-memory Deep Multitask Learning via Neural Weight Virtualization ☆15 · Updated 5 years ago
- Source code for Jellyfish, a soft real-time inference serving system ☆13 · Updated 2 years ago
- Our unique contributions are in tools/train/benchmark. ☆19 · Updated 5 months ago
- A curated list of early exiting (LLM, CV, NLP, etc.) ☆61 · Updated last year
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training ☆224 · Updated last year
- Deep Compressive Offloading: Speeding Up Neural Network Inference by Trading Edge Computation for Network Latency ☆28 · Updated 4 years ago
- Oort: Efficient Federated Learning via Guided Participant Selection ☆128 · Updated 3 years ago