OAID / Tengine
Tengine is a lightweight, high-performance, modular inference engine for embedded devices.
☆4,427 · Updated 4 months ago
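Since Tengine is the reference point for every entry below, here is a minimal sketch of what an inference call with its C API typically looks like. The header path, exact signatures, and the "mobilenet.tmfile" model name are assumptions and may differ between Tengine versions; treat this as an illustration of the load/prerun/run flow, not a verbatim example from the project.

```cpp
// Minimal sketch of loading and running a model with Tengine's C API.
// Header name, signatures, and the model file are assumptions for illustration.
#include <cstdio>
#include <vector>
#include "tengine/c_api.h"

int main() {
    if (init_tengine() != 0) {                        // initialize the runtime
        fprintf(stderr, "init_tengine failed\n");
        return -1;
    }

    // "mobilenet.tmfile" is a placeholder model converted with the Tengine tools.
    graph_t graph = create_graph(nullptr, "tengine", "mobilenet.tmfile");
    if (graph == nullptr) { release_tengine(); return -1; }

    // Bind an input buffer (1x3x224x224 float here, purely illustrative).
    int dims[4] = {1, 3, 224, 224};
    std::vector<float> input(1 * 3 * 224 * 224, 0.f);
    tensor_t in = get_graph_input_tensor(graph, 0, 0);
    set_tensor_shape(in, dims, 4);
    set_tensor_buffer(in, input.data(), input.size() * sizeof(float));

    prerun_graph(graph);                              // allocate backend resources
    run_graph(graph, 1);                              // blocking inference

    tensor_t out = get_graph_output_tensor(graph, 0, 0);
    float* scores = (float*)get_tensor_buffer(out);   // read results
    printf("first score: %f\n", scores ? scores[0] : 0.f);

    postrun_graph(graph);
    destroy_graph(graph);
    release_tengine();
    return 0;
}
```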
Alternatives and similar repositories for Tengine:
Users interested in Tengine are comparing it to the libraries listed below.
- AutoKernel is an easy-to-use, low-barrier automatic operator optimization tool that improves the efficiency of deploying deep learning algorithms. ☆732 · Updated 2 years ago
- TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop and server. TNN is … ☆4,443 · Updated 3 weeks ago
- 🔥 (yolov3 yolov4 yolov5 unet ...) A mini PyTorch inference framework inspired by darknet. ☆736 · Updated last year
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆933 · Updated 5 months ago
- 😎 A Collection of Awesome NCNN-based Projects ☆721 · Updated 2 years ago
- ppl.cv is a high-performance image processing library of openPPL supporting various platforms. ☆495 · Updated 2 months ago
- MobileNetV2-YoloV3-Nano: 0.5 BFLOPs, 3 MB; HUAWEI P40: 6 ms/img. YoloFace-500k: 0.1 BFLOPs, 420 KB. ☆1,719 · Updated 3 years ago
- A primitive library for neural networks ☆1,308 · Updated last month
- The minimal opencv for Android, iOS, ARM Linux, Windows, Linux, macOS, HarmonyOS, WebAssembly, watchOS, tvOS, visionOS ☆2,681 · Updated 3 weeks ago
- 🍅🍅🍅 YOLOv5-Lite: evolved from yolov5, with a model size of only 900+ KB (int8) and 1.7 MB (fp16). Reaches 15 FPS on the Raspberry Pi 4B. ☆2,299 · Updated 6 months ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,533 · Updated 5 years ago
- Simplify your ONNX model ☆3,942 · Updated 4 months ago
- An ultra-lightweight universal object detection algorithm based on YOLO; the computation cost is only 250 MFLOPs, and the ncnn model size is… ☆2,028 · Updated 3 years ago
- A tool to modify ONNX models visually, based on Netron and Flask. ☆1,400 · Updated 2 weeks ago
- Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn ☆1,215 · Updated last week
- OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient. ☆6,425 · Updated this week
- Open deep learning compiler stack for Kendryte AI accelerators ✨ ☆760 · Updated this week
- PaddleSlim is an open-source library for deep model compression and architecture search. ☆1,569 · Updated last month
- A lightweight, portable pure C99 onnx inference engine for embedded devices with hardware acceleration support. ☆595 · Updated last month
- nndeploy is an end-to-end model deployment framework. Based on multi-terminal inference and directed acyclic graph model deployment, it i… ☆676 · Updated this week
- dabnn is an accelerated binary neural network inference framework for mobile platforms ☆775 · Updated 5 years ago
- 🛠 A lite C++ toolkit of 100+ Awesome AI models, supporting ORT, MNN, NCNN, TNN and TensorRT. 🎉🎉 ☆3,724 · Updated 3 weeks ago
- 🍅 Deploy ncnn on mobile phones. Supports Android and iOS. ☆1,502 · Updated 2 years ago
- ☆848 · Updated last year
- PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool. ☆1,610 · Updated 9 months ago
- A flexible, high-performance serving framework for machine learning models (PaddlePaddle serving deployment framework) ☆896 · Updated 8 months ago
- An inference framework that also ships many useful demos; please give it a star if you find it useful. ☆2,219 · Updated 4 months ago
- ⚡️An Easy-to-use and Fast Deep Learning Model Deployment Toolkit for ☁️Cloud 📱Mobile and 📹Edge. Including Image, Video, Text and Audio … ☆3,060 · Updated this week
- ☆240 · Updated 2 years ago
- MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba ☆8,926 · Updated this week
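For comparison with the Tengine sketch above, here is a rough outline of MNN's session-based C++ interface (the last entry in the list). The header paths, the "model.mnn" file name, the thread count, and the tensor-copy details are assumptions rather than a verbatim example; MNN's API varies across releases.

```cpp
// Rough sketch of MNN's Interpreter/Session flow; file names and thread count
// are illustrative, and API details may differ between MNN versions.
#include <cstdio>
#include <memory>
#include <MNN/Interpreter.hpp>
#include <MNN/Tensor.hpp>

int main() {
    // Load a converted .mnn model (placeholder path).
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("model.mnn"));
    if (!net) return -1;

    MNN::ScheduleConfig config;
    config.numThread = 4;                     // CPU threads; backend is configurable
    MNN::Session* session = net->createSession(config);

    MNN::Tensor* input = net->getSessionInput(session, nullptr);
    // Fill the input through a host-side tensor so the layout matches the backend.
    std::shared_ptr<MNN::Tensor> hostIn(
        MNN::Tensor::createHostTensorFromDevice(input, false));
    // ... write hostIn->host<float>() here ...
    input->copyFromHostTensor(hostIn.get());

    net->runSession(session);                 // synchronous inference

    MNN::Tensor* output = net->getSessionOutput(session, nullptr);
    std::shared_ptr<MNN::Tensor> hostOut(
        MNN::Tensor::createHostTensorFromDevice(output, true));
    printf("first output: %f\n", hostOut->host<float>()[0]);

    net->releaseSession(session);
    return 0;
}
```

The two sketches follow the same create-graph / bind-tensors / run pattern, which is the main axis on which the engines in this list differ: how much of that flow is exposed versus hidden behind a higher-level deployment toolkit.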