OAID / Tengine
Tengine is a lightweight, high-performance, modular inference engine for embedded devices
☆4,494 · Updated 8 months ago
Alternatives and similar repositories for Tengine
Users interested in Tengine are comparing it to the libraries listed below
- AutoKernel is an easy-to-use, low-barrier automatic operator optimization tool that improves the efficiency of deploying deep learning algorithms. ☆742 · Updated 3 years ago
- TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop and server. TNN is … ☆4,589 · Updated 6 months ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆953 · Updated 6 months ago
- FeatherCNN is a high performance inference engine for convolutional neural networks. ☆1,222 · Updated 6 years ago
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. ☆5,031 · Updated last year
- 🔥 (yolov3 yolov4 yolov5 unet ...) A mini PyTorch inference framework inspired by darknet. ☆745 · Updated 2 years ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,547 · Updated 6 years ago
- A primitive library for neural networks ☆1,366 · Updated 11 months ago
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆22,240 · Updated this week
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… ☆3,068 · Updated this week
- Arm NN ML Software. ☆1,287 · Updated this week
- 🍅🍅🍅YOLOv5-Lite: Evolved from yolov5; the model size is only 900+ KB (int8) and 1.7 MB (fp16). Reaches 15 FPS on the Raspberry Pi 4B. ☆2,444 · Updated last year
- OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient. ☆9,368 · Updated 2 months ago
- Simplify your onnx model ☆4,222 · Updated 2 months ago
- MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba. Full multimodal LLM … ☆13,414 · Updated this week
- A tool to modify ONNX models visually, based on Netron and Flask. ☆1,570 · Updated 8 months ago
- Open deep learning compiler stack for Kendryte AI accelerators ✨ ☆818 · Updated last week
- MobileNetV2-YoloV3-Nano: 0.5 BFLOPs, 3 MB; HUAWEI P40: 6 ms/img. YoloFace-500k: 0.1 BFLOPs, 420 KB ☆1,744 · Updated 4 years ago
- An ultra-lightweight universal object detection algorithm based on YOLO; the computation cost is only 250 MFLOPs, and the ncnn model size is… ☆2,071 · Updated 4 years ago
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. ☆2,912 · Updated 2 years ago
- PaddleSlim is an open-source library for deep model compression and architecture search. ☆1,610 · Updated last week
- A lightweight, portable pure C99 onnx inference engine for embedded devices with hardware acceleration support. ☆638 · Updated 3 months ago
- 😎 A Collection of Awesome NCNN-based Projects ☆744 · Updated 2 years ago
- An inference framework with many useful demos; please star it if you find it useful. ☆2,220 · Updated last year
- A flexible, high-performance carrier for machine learning models (PaddlePaddle serving deployment framework) ☆919 · Updated 5 months ago
- Adlik: Toolkit for Accelerating Deep Learning Inference ☆807 · Updated last year
- The minimal opencv for Android, iOS, ARM Linux, Windows, Linux, MacOS, HarmonyOS, WebAssembly, watchOS, tvOS, visionOS ☆3,083 · Updated last week
- micronet, a model compression and deployment library. Compression: 1. quantization: quantization-aware training (QAT), High-Bit (>2b) (DoReFa/Quantiz… ☆2,263 · Updated 6 months ago
- Source code analysis of the darknet deep learning framework: detailed Chinese annotations covering the framework's principles, implementation, and syntax analysis. ☆1,605 · Updated 7 years ago
- TVM Documentation in Simplified Chinese / TVM 中文文档 ☆2,629 · Updated last month