zoranzhao / DeepThings
A Portable C Library for Distributed CNN Inference on IoT Edge Clusters
☆ 82 · Updated 5 years ago
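The one-line summary above describes DeepThings as a C library for distributing CNN inference across an IoT edge cluster. As a rough illustration of the underlying idea (spatially partitioning a layer's input into overlapping tiles so each device can compute its tile independently, the approach the DeepThings paper calls Fused Tile Partitioning), here is a minimal C sketch. It is not DeepThings code or its API; the grid size, kernel size, and feature-map dimensions are arbitrary example values.

```c
/*
 * Conceptual sketch only: NOT the DeepThings API.
 * Splits an H x W feature map into a grid of tiles, padding each tile
 * with the halo (overlap) that a k x k, stride-1 convolution needs at
 * its borders so each tile can be processed on a separate edge device.
 */
#include <stdio.h>

typedef struct {
    int x0, y0;   /* top-left corner of the tile (inclusive) */
    int x1, y1;   /* bottom-right corner of the tile (inclusive) */
} tile_region;

static tile_region tile_with_halo(int width, int height,
                                  int grid_w, int grid_h,
                                  int gx, int gy, int kernel)
{
    int tile_w = (width  + grid_w - 1) / grid_w;  /* ceil division */
    int tile_h = (height + grid_h - 1) / grid_h;
    int halo = (kernel - 1) / 2;                  /* extra rows/cols of overlap */

    tile_region r;
    r.x0 = gx * tile_w - halo;
    r.y0 = gy * tile_h - halo;
    r.x1 = (gx + 1) * tile_w - 1 + halo;
    r.y1 = (gy + 1) * tile_h - 1 + halo;

    /* Clamp to the feature-map boundary. */
    if (r.x0 < 0) r.x0 = 0;
    if (r.y0 < 0) r.y0 = 0;
    if (r.x1 > width  - 1) r.x1 = width  - 1;
    if (r.y1 > height - 1) r.y1 = height - 1;
    return r;
}

int main(void)
{
    /* Example: split a 416x416 input into a 3x3 grid for a 3x3 kernel,
     * printing the input region each of the 9 devices would need. */
    for (int gy = 0; gy < 3; gy++) {
        for (int gx = 0; gx < 3; gx++) {
            tile_region r = tile_with_halo(416, 416, 3, 3, gx, gy, 3);
            printf("tile (%d,%d): x=[%d,%d] y=[%d,%d]\n",
                   gx, gy, r.x0, r.x1, r.y0, r.y1);
        }
    }
    return 0;
}
```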
Alternatives and similar repositories for DeepThings
Users interested in DeepThings are comparing it to the libraries listed below:
- ☆ 116 · Updated 6 years ago
- A Portable C Library for Distributed CNN Inference on IoT Edge Clusters ☆ 27 · Updated 4 years ago
- FilterForward: Scaling Video Analytics on Constrained Edge Nodes ☆ 28 · Updated 5 years ago
- A list of awesome edge AI inference-related papers ☆ 96 · Updated last year
- ☆ 46 · Updated 2 years ago
- ☆ 57 · Updated 3 years ago
- ☆ 203 · Updated last year
- Model-less Inference Serving ☆ 88 · Updated last year
- ☆ 77 · Updated 2 years ago
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆ 22 · Updated 4 years ago
- PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices ☆ 35 · Updated last year
- ☆ 51 · Updated 2 years ago
- ☆ 11 · Updated 5 years ago
- Source code and datasets for Ekya, a system for continuous learning on the edge ☆ 106 · Updated 3 years ago
- ☆ 49 · Updated 6 months ago
- ☆ 26 · Updated 2 years ago
- Metis: Learning to Schedule Long-Running Applications in Shared Container Clusters at Scale ☆ 18 · Updated 5 years ago
- Cache design for CNN on mobile ☆ 32 · Updated 6 years ago
- ☆ 40 · Updated 4 years ago
- ☆ 37 · Updated 2 weeks ago
- ☆ 21 · Updated last year
- A deep learning-driven scheduler for elastic training in deep learning clusters ☆ 30 · Updated 4 years ago
- Demystifying Fog Systems Using Container-based Benchmarking ☆ 35 · Updated 5 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆ 126 · Updated 3 years ago
- iGniter, an interference-aware GPU resource provisioning framework for achieving predictable performance of DNN inference in the cloud ☆ 38 · Updated last year
- An Efficient Dynamic Resource Scheduler for Deep Learning Clusters ☆ 42 · Updated 7 years ago
- Tiresias is a GPU cluster manager for distributed deep learning training ☆ 154 · Updated 5 years ago
- Deep Compressive Offloading: Speeding Up Neural Network Inference by Trading Edge Computation for Network Latency ☆ 28 · Updated 4 years ago
- BATCH: Adaptive Batching for Efficient Machine Learning Serving on Serverless Platforms ☆ 10 · Updated 3 years ago
- ☆ 14 · Updated 3 years ago