xumengwei / DeepCache
Cache design for CNNs on mobile
☆32 · Updated 6 years ago
Alternatives and similar repositories for DeepCache
Users interested in DeepCache are comparing it to the libraries listed below.
- A Portable C Library for Distributed CNN Inference on IoT Edge Clusters ☆82 · Updated 5 years ago
- ☆36 · Updated 7 years ago
- ☆77 · Updated 2 years ago
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation ☆25 · Updated 4 years ago
- ☆19 · Updated 3 years ago
- ☆46 · Updated 2 years ago
- FilterForward: Scaling Video Analytics on Constrained Edge Nodes ☆28 · Updated 5 years ago
- A Portable C Library for Distributed CNN Inference on IoT Edge Clusters ☆27 · Updated 4 years ago
- ☆116 · Updated 6 years ago
- ☆30 · Updated 2 years ago
- Deploying Transformer models for computer vision to mobile devices ☆18 · Updated 3 years ago
- ☆130 · Updated last year
- Server-driven Video Streaming for Deep Learning Inference ☆94 · Updated 3 years ago
- ☆57 · Updated 3 years ago
- A list of awesome edge-AI inference papers ☆96 · Updated last year
- ☆14 · Updated 3 years ago
- Adaptive Model Streaming for real-time video inference on edge devices ☆41 · Updated 3 years ago
- (ICPP '20) ShadowTutor: Distributed Partial Distillation for Mobile Video DNN Inference ☆12 · Updated 5 years ago
- ☆203 · Updated last year
- Source code and datasets for Ekya, a system for continuous learning on the edge ☆106 · Updated 3 years ago
- Systems- and networking-related video research published in major computer science venues ☆159 · Updated 2 years ago
- [MobiSys 2020] Fast and Scalable In-memory Deep Multitask Learning via Neural Weight Virtualization ☆16 · Updated 5 years ago
- MobiSys#114 ☆21 · Updated last year
- DNN compression and acceleration on edge devices ☆55 · Updated 4 years ago
- Model-less Inference Serving ☆88 · Updated last year
- Group-meeting collections of the HKUST System NetworkING (SING) Research Group ☆27 · Updated 5 years ago
- ☆20 · Updated 2 years ago
- Adaptive Wide-Area Streaming Analytics ☆30 · Updated 6 years ago
- ☆17 · Updated 5 years ago
- Deep Compressive Offloading: Speeding Up Neural Network Inference by Trading Edge Computation for Network Latency ☆28 · Updated 4 years ago