parrotsky / AutoDiCE
Distributed CNN inference at the edge; extends ncnn with CUDA and MPI+OpenMP support.
☆18 · Updated last year
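As a rough illustration of the hybrid MPI+OpenMP layout such a setup implies, the sketch below pipelines a toy model across MPI ranks and parallelizes each stage's work with OpenMP threads. This is not AutoDiCE's or ncnn's API; the layer function, tensor size, and layer partitioning are illustrative assumptions.

```cpp
// Minimal sketch of the hybrid MPI+OpenMP pattern: each MPI rank executes a
// contiguous slice of a CNN's layers, OpenMP threads split the per-layer work,
// and intermediate activations are forwarded to the next rank.
// "dummy_layer", the model depth, and the activation size are placeholders.
#include <mpi.h>
#include <vector>
#include <cstdio>

static void dummy_layer(std::vector<float>& act) {
    // Placeholder for one layer's compute; OpenMP splits elements across threads.
    #pragma omp parallel for
    for (long i = 0; i < (long)act.size(); ++i)
        act[i] = act[i] * 0.5f + 1.0f;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, world = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world);

    const int total_layers = 8;        // assumed model depth
    const int elems = 1 << 16;         // assumed activation size
    std::vector<float> act(elems, 1.0f);

    // Receive activations from the previous pipeline stage (if any).
    if (rank > 0)
        MPI_Recv(act.data(), elems, MPI_FLOAT, rank - 1, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // Run this rank's contiguous slice of layers.
    const int per_rank = (total_layers + world - 1) / world;
    for (int l = 0; l < per_rank; ++l)
        dummy_layer(act);

    // Forward to the next stage, or report at the final rank.
    if (rank < world - 1)
        MPI_Send(act.data(), elems, MPI_FLOAT, rank + 1, 0, MPI_COMM_WORLD);
    else
        std::printf("rank %d finished pipeline, act[0]=%f\n", rank, act[0]);

    MPI_Finalize();
    return 0;
}
```

Build with `mpicxx -fopenmp` and launch with `mpirun -np <ranks>`; in a real deployment the per-rank slices would run actual ncnn (or CUDA-backed) layers rather than the placeholder.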
Alternatives and similar repositories for AutoDiCE:
Users interested in AutoDiCE are comparing it to the libraries listed below.
- PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices ☆30 · Updated last year
- A Portable C Library for Distributed CNN Inference on IoT Edge Clusters ☆83 · Updated 5 years ago
- A list of awesome edge-AI inference related papers. ☆95 · Updated last year
- MobiSys#114 ☆21 · Updated last year
- To deploy Transformer models in CV to mobile devices. ☆17 · Updated 3 years ago
- Open-source artifacts and code of our MICRO'23 paper titled “Sparse-DySta: Sparsity-Aware Dynamic and Static Scheduling for Sparse Multi… ☆35 · Updated last year
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation ☆24 · Updated 3 years ago
- Multi-branch model for concurrent execution ☆17 · Updated last year
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo ☆18 · Updated 2 years ago
- RESPECT: Reinforcement Learning based Edge Scheduling on Pipelined Coral Edge TPUs (DAC'23) ☆10 · Updated last year
- ☆39 · Updated 4 years ago
- LLM inference analyzer for different hardware platforms ☆54 · Updated 2 weeks ago
- ☆77 · Updated last year
- [CVPRW 2021] Dynamic-OFA: Runtime DNN Architecture Switching for Performance Scaling on Heterogeneous Embedded Platforms ☆29 · Updated 2 years ago
- [DAC 2024] EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive La… ☆48 · Updated 8 months ago
- A system-level domain-specific systems-on-chip simulation framework ☆17 · Updated 2 years ago
- An out-of-box PyTorch scaffold for Neural Network Quantization-Aware Training (QAT) research. Website: https://github.com/zhutmost/neuralz… ☆26 · Updated 2 years ago
- InFi is a library for building input filters for resource-efficient inference. ☆38 · Updated last year
- Experimental deep learning framework written in Rust ☆14 · Updated 2 years ago
- ☆36 · Updated 6 years ago
- About DNN compression and acceleration on edge devices. ☆56 · Updated 3 years ago
- ☆35 · Updated 2 years ago
- Deep Compressive Offloading: Speeding Up Neural Network Inference by Trading Edge Computation for Network Latency ☆27 · Updated 4 years ago
- [ICML 2023] This project is the official implementation of our accepted ICML 2023 paper BiBench: Benchmarking and Analyzing Network Binar… ☆55 · Updated last year
- You Only Search Once: On Lightweight Differentiable Architecture Search for Resource-Constrained Embedded Platforms ☆10 · Updated last year
- Official implementation of the paper LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance", ECCV 2022 ☆53 · Updated 2 years ago
- ☆29 · Updated 3 years ago
- LaLaRAND: Flexible Layer-by-Layer CPU/GPU Scheduling for Real-Time DNN Tasks ☆12 · Updated 2 years ago
- ☆39 · Updated 5 years ago
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators ☆108 · Updated 2 years ago