CPS-AI / Deep-Compressive-Offloading
Deep Compressive Offloading: Speeding Up Neural Network Inference by Trading Edge Computation for Network Latency
☆27 · Updated 4 years ago
Alternatives and similar repositories for Deep-Compressive-Offloading:
Users interested in Deep-Compressive-Offloading are comparing it to the libraries listed below.
- Cloud-edge collaboration / collaborative inference 📚 Dynamic adaptive DNN surgery for inference acceleration on the edge ☆34 · Updated last year
- [IEEE Access] "Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-constrained Edge Computing Systems" and […] ☆37 · Updated last year
- Autodidactic Neurosurgeon: Collaborative Deep Inference for Mobile Edge Intelligence via Online Learning ☆41 · Updated 3 years ago