parrotsky / AutoDiCE
Distributed CNN inference at the edge; extends ncnn with CUDA and MPI+OpenMP support.
☆21 · Updated 4 months ago
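The idea described above, partitioning a CNN so that each edge node executes a contiguous slice of layers and forwards its activation to the next node, can be sketched in plain Python. This is a minimal, framework-free illustration; the names `partition` and `run_pipeline` are hypothetical and not AutoDiCE's actual API, and in the real system each hand-off would be an MPI send/recv over layers executed by ncnn/CUDA kernels.

```python
def layer(x, w, b):
    """Toy stand-in for a CNN stage: affine transform + ReLU."""
    return max(w * x + b, 0.0)

def partition(layers, n_nodes):
    """Split the layer list into contiguous slices, one per edge node."""
    k, m = divmod(len(layers), n_nodes)
    parts, start = [], 0
    for i in range(n_nodes):
        end = start + k + (1 if i < m else 0)  # first m nodes get one extra layer
        parts.append(layers[start:end])
        start = end
    return parts

def run_pipeline(x, parts):
    """Each node runs its slice, then forwards the activation to the next node
    (in a distributed deployment this hand-off would be MPI point-to-point)."""
    for node_layers in parts:
        for w, b in node_layers:
            x = layer(x, w, b)
    return x

# Four toy layers mapped onto two "edge nodes".
layers = [(1.5, 0.2), (0.8, -0.1), (2.0, 0.0), (0.5, 0.3)]
parts = partition(layers, n_nodes=2)
out = run_pipeline(1.0, parts)
```

Partitioning does not change the result: running the slices in order is mathematically identical to running all layers on one device, which is what makes this style of model splitting safe.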
Alternatives and similar repositories for AutoDiCE
Users interested in AutoDiCE are comparing it to the libraries listed below.
- A curated list of edge-AI inference papers ☆98 · Updated 2 years ago
- ☆78 · Updated 2 years ago
- PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices ☆37 · Updated last year
- [CVPRW 2021] Dynamic-OFA: Runtime DNN Architecture Switching for Performance Scaling on Heterogeneous Embedded Platforms ☆30 · Updated 3 years ago
- Code for the ACM MobiCom 2024 paper "FlexNN: Efficient and Adaptive DNN Inference on Memory-Constrained Edge Devices" ☆58 · Updated 11 months ago
- Quantizes PyTorch models; supports post-training quantization and quantization-aware training ☆14 · Updated 2 years ago
- An out-of-the-box PyTorch scaffold for neural network Quantization-Aware Training (QAT) research. Website: https://github.com/zhutmost/neuralz… ☆25 · Updated 3 years ago
- A Portable C Library for Distributed CNN Inference on IoT Edge Clusters ☆88 · Updated 5 years ago
- Official implementation of the ECCV 2022 paper LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" ☆61 · Updated 2 years ago
- ☆36 · Updated 6 years ago
- MobiSys#114 ☆22 · Updated 2 years ago
- Jetson-targeted deep learning inference acceleration framework built on TensorRT ☆29 · Updated 3 months ago
- Deploying CV Transformer models to mobile devices ☆18 · Updated 3 years ago
- Fast NPU-aware neural architecture search ☆22 · Updated 4 years ago
- On DNN compression and acceleration for edge devices ☆57 · Updated 4 years ago
- PyTorch implementation of TQT ☆21 · Updated 4 years ago
- Code for the paper "NeuralPower: Predict and Deploy Energy-Efficient Convolutional Neural Networks" ☆22 · Updated 6 years ago
- My name is Fang Biao. I'm currently pursuing my Master's degree with the College of Computer Science and Engineering, Sichuan University, … ☆53 · Updated 2 years ago
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts ☆132 · Updated last year
- [ICML 2023] Official implementation of our accepted ICML 2023 paper BiBench: Benchmarking and Analyzing Network Binar… ☆56 · Updated last year
- Implementation of "NITI: Training Integer Neural Networks Using Integer-only Arithmetic" (arXiv) ☆89 · Updated 3 years ago
- Optimize tensor programs fast with Felix, a gradient-descent autotuner ☆29 · Updated last year
- A Winograd minimal-filter implementation in CUDA ☆28 · Updated 4 years ago
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference ☆54 · Updated last year
- ☆37 · Updated 3 years ago
- CSV spreadsheets and other material for AI accelerator survey papers ☆186 · Updated last month
- An external memory allocator example for PyTorch ☆16 · Updated 5 months ago
- [DAC 2024] EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive La… ☆74 · Updated last year
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 5 years ago
- Codebase for the Progressive Mixed-Precision Decoding paper ☆19 · Updated 5 months ago