parrotsky / AutoDiCE
Distributed CNN inference at the edge; extends ncnn with CUDA and MPI+OpenMP support.
☆18 · Updated last year
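AutoDiCE's one-line description is distributed CNN inference: the model is split so that each edge node runs only part of the network. As a conceptual sketch only (this is not AutoDiCE's actual partitioning algorithm, and the layer costs below are made up), the first step of such a system is assigning contiguous groups of layers to nodes so per-node compute is roughly balanced:

```python
# Conceptual sketch: split a CNN's layers into contiguous groups, one per
# edge node, balancing per-node compute. Hypothetical costs; NOT the
# partitioner actually used by AutoDiCE.

def partition_layers(costs, num_nodes):
    """Greedy contiguous split: close a group once its accumulated cost
    reaches an equal share of the total work."""
    target = sum(costs) / num_nodes  # ideal per-node load
    groups, current, load = [], [], 0
    for i, c in enumerate(costs):
        current.append(i)
        load += c
        # Close this group if it has reached its share and we still
        # need to leave layers for the remaining nodes.
        if load >= target and len(groups) < num_nodes - 1:
            groups.append(current)
            current, load = [], 0
    groups.append(current)  # last node takes whatever remains
    return groups

# Per-layer compute costs (e.g. relative MACs) for an 8-layer CNN.
layer_costs = [4, 1, 6, 2, 3, 5, 2, 1]
print(partition_layers(layer_costs, 3))  # → [[0, 1, 2], [3, 4, 5], [6, 7]]
```

In a full system each group would then be exported as a sub-model and the nodes would exchange intermediate activations (e.g. over MPI), which is the part tools in this list automate.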
Alternatives and similar repositories for AutoDiCE:
Users interested in AutoDiCE are comparing it to the libraries listed below.
- A list of awesome edge-AI inference-related papers. ☆92 · Updated last year
- A Portable C Library for Distributed CNN Inference on IoT Edge Clusters. ☆81 · Updated 4 years ago
- PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices. ☆29 · Updated last year
- Deploying CV Transformer models to mobile devices. ☆17 · Updated 3 years ago
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems". ☆21 · Updated 4 years ago
- ☆76 · Updated last year
- MobiSys#114. ☆21 · Updated last year
- CPrune: Compiler-Informed Model Pruning for Efficient Target-Aware DNN Execution. ☆16 · Updated last year
- ☆21 · Updated last year
- Multi-branch model for concurrent execution. ☆17 · Updated last year
- Official implementation of LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" (ECCV 2022). ☆51 · Updated last year
- [CVPRW 2021] Dynamic-OFA: Runtime DNN Architecture Switching for Performance Scaling on Heterogeneous Embedded Platforms. ☆29 · Updated 2 years ago
- InFi is a library for building input filters for resource-efficient inference. ☆37 · Updated last year
- ☆14 · Updated 6 months ago
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation. ☆24 · Updated 3 years ago
- ☆39 · Updated 4 years ago
- Open-source artifacts and code of our MICRO'23 paper "Sparse-DySta: Sparsity-Aware Dynamic and Static Scheduling for Sparse Multi…". ☆34 · Updated last year
- ☆99 · Updated last year
- DNN compression and acceleration on edge devices. ☆55 · Updated 3 years ago
- You Only Search Once: On Lightweight Differentiable Architecture Search for Resource-Constrained Embedded Platforms. ☆10 · Updated last year
- Implementation of "NITI: Training Integer Neural Networks Using Integer-only Arithmetic" (arXiv). ☆80 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- Implementation of the paper "AdaTune: Adaptive Tensor Program Compilation Made Efficient" (NeurIPS 2020). ☆13 · Updated 3 years ago
- PyTorch implementation of TQT. ☆21 · Updated 3 years ago
- Deep Compressive Offloading: Speeding Up Neural Network Inference by Trading Edge Computation for Network Latency. ☆26 · Updated 4 years ago
- ☆10 · Updated 3 years ago
- Open-source project for "Mandheling: Mixed-Precision On-Device DNN Training with DSP Offloading" [MobiCom'22]. ☆18 · Updated 2 years ago
- A curated list of early-exiting work (LLM, CV, NLP, etc.). ☆42 · Updated 5 months ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup. ☆34 · Updated 2 years ago
- ☆19 · Updated 2 years ago