OAID / Tengine
Tengine is a lightweight, high-performance, modular inference engine for embedded devices
☆4,463 · Updated 2 months ago
Alternatives and similar repositories for Tengine:
Users interested in Tengine are comparing it to the libraries listed below
- AutoKernel is a simple, easy-to-use, low-barrier automatic operator optimization tool that improves the deployment efficiency of deep learning algorithms. ☆737 · Updated 2 years ago
- TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop and server. TNN is … ☆4,503 · Updated last month
- TengineKit - Free, Fast, Easy, Real-Time Face Detection & Face Landmarks & Face Attributes & Hand Detection & Hand Landmarks & Body Detec… ☆2,299 · Updated 3 years ago
- A primitive library for neural networks ☆1,336 · Updated 5 months ago
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆21,418 · Updated this week
- 🍅🍅🍅 YOLOv5-Lite: Evolved from yolov5; the model size is only 900+ KB (int8) and 1.7 MB (fp16). Reaches 15 FPS on the Raspberry Pi 4B. ☆2,370 · Updated 10 months ago
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… ☆2,960 · Updated last week
- 🛠 A lite C++ AI toolkit: 100+ 🎉 models (Stable-Diffusion, Face-Fusion, YOLO series, Det, Seg, Matting) with MNN, ORT and TRT. ☆4,075 · Updated last week
- FeatherCNN is a high performance inference engine for convolutional neural networks. ☆1,217 · Updated 5 years ago
- Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn ☆1,254 · Updated last week
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. ☆5,012 · Updated 10 months ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆943 · Updated 3 weeks ago
- MobileNetV2-YoloV3-Nano: 0.5 BFlops, 3 MB. HUAWEI P40: 6 ms/img; YoloFace-500k: 0.1 BFlops, 420 KB ☆1,730 · Updated 4 years ago
- 🔥 (yolov3 yolov4 yolov5 unet ...) A mini PyTorch inference framework inspired by darknet. ☆746 · Updated 2 years ago
- dabnn is an accelerated binary neural network inference framework for mobile platforms ☆776 · Updated 5 years ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,540 · Updated 5 years ago
- A lightweight, portable pure C99 onnx inference engine for embedded devices with hardware acceleration support. ☆612 · Updated 5 months ago
- 😎 A Collection of Awesome NCNN-based Projects ☆738 · Updated 2 years ago
- 🍅 Deploy ncnn on mobile phones. Supports Android and iOS. ☆1,535 · Updated 2 years ago
- The minimal opencv for Android, iOS, ARM Linux, Windows, Linux, MacOS, HarmonyOS, WebAssembly, watchOS, tvOS, visionOS ☆2,830 · Updated last week
- MNN applications by MNN, JNI exec, RK3399. Supports tflite/tensorflow/caffe/onnx models. ☆506 · Updated 5 years ago
- PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool. ☆1,686 · Updated last year
- An ultra-lightweight universal object detection algorithm based on YOLO; the computation cost is only 250 MFLOPs, and the ncnn model size is… ☆2,048 · Updated 3 years ago
- MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba. Full multimodal LLM … ☆10,662 · Updated this week
- PaddleSlim is an open-source library for deep model compression and architecture search. ☆1,590 · Updated 5 months ago
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,065 · Updated 2 months ago
- ⚡️ An Easy-to-use and Fast Deep Learning Model Deployment Toolkit for ☁️Cloud 📱Mobile and 📹Edge. Including Image, Video, Text and Audio … ☆3,175 · Updated 2 months ago
- OpenMMLab Model Deployment Framework ☆2,937 · Updated 7 months ago
- TVM Documentation in Simplified Chinese / TVM 中文文档 ☆1,188 · Updated 3 weeks ago