alibaba / MNN
MNN is a blazing-fast, lightweight deep learning framework, battle-tested by business-critical use cases at Alibaba. Full multimodal LLM Android app: [MNN-LLM-Android](./apps/Android/MnnLlmChat/README.md). MNN TaoAvatar Android, local 3D avatar intelligence: [MNN TaoAvatar](./apps/Android/Mnn3dAvatar/README.md).
☆12,829 · Updated last week
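For orientation, the sketch below shows what single-input inference with MNN's classic C++ Session API roughly looks like. This is a minimal sketch under stated assumptions, not MNN's official example: the model path `model.mnn`, the thread count, and the zero-filled input are placeholders, and a real application would convert its model with MNNConvert and do proper preprocessing.

```cpp
// Minimal MNN inference sketch (Session API). "model.mnn" is a placeholder
// for a model converted offline with MNNConvert.
#include <MNN/Interpreter.hpp>
#include <cstring>
#include <memory>
#include <vector>

int main() {
    // Load the converted model from disk.
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("model.mnn"));

    // Schedule on CPU; other backends (e.g. MNN_FORWARD_OPENCL) exist.
    MNN::ScheduleConfig config;
    config.type = MNN_FORWARD_CPU;
    config.numThread = 4; // placeholder thread count
    MNN::Session* session = net->createSession(config);

    // Fill the input tensor via a host-side copy, which keeps the
    // backend's internal layout intact.
    MNN::Tensor* input = net->getSessionInput(session, nullptr);
    std::shared_ptr<MNN::Tensor> inputHost(
        MNN::Tensor::createHostTensorFromDevice(input, false));
    std::vector<float> data(inputHost->elementSize(), 0.0f); // real preprocessing goes here
    std::memcpy(inputHost->host<float>(), data.data(), data.size() * sizeof(float));
    input->copyFromHostTensor(inputHost.get());

    // Run inference and copy the output back to host memory.
    net->runSession(session);
    MNN::Tensor* output = net->getSessionOutput(session, nullptr);
    std::shared_ptr<MNN::Tensor> outputHost(
        MNN::Tensor::createHostTensorFromDevice(output, false));
    output->copyToHostTensor(outputHost.get());
    float first = outputHost->host<float>()[0]; // read results here
    (void)first;

    net->releaseSession(session);
    return 0;
}
```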
Alternatives and similar repositories for MNN
Users interested in MNN are comparing it to the libraries listed below.
- TNN: a uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is … ☆4,569 · Updated 3 months ago
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆21,919 · Updated this week
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. ☆5,023 · Updated last year
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices ☆4,487 · Updated 5 months ago
- Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators ☆12,540 · Updated this week
- PaddlePaddle high-performance deep learning inference engine for mobile and edge ☆7,142 · Updated 2 months ago
- Minimal OpenCV for Android, iOS, ARM Linux, Windows, Linux, macOS, HarmonyOS, WebAssembly, watchOS, tvOS, and visionOS ☆2,998 · Updated last week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,039 · Updated this week
- OneFlow is a deep learning framework designed to be user-friendly, scalable, and efficient. ☆9,361 · Updated this week
- 🛠 A lightweight C++ AI toolkit: 100+ models with MNN, ORT, and TRT, including detection, segmentation, Stable-Diffusion, Face-Fusion, etc. 🎉 ☆4,216 · Updated this week
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. ☆2,913 · Updated 2 years ago
- NanoDet-Plus ⚡ Super fast and lightweight anchor-free object detection model. 🔥 Only 980 KB (int8) / 1.8 MB (fp16), runs at 97 FPS on a cellphone… ☆6,022 · Updated last year
- Simplify your ONNX model ☆4,146 · Updated 11 months ago
- MegEngine is a fast, scalable, easy-to-use deep learning framework with automatic differentiation ☆4,804 · Updated 9 months ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆954 · Updated 4 months ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,086 · Updated this week
- MindSpore is an open source deep learning training/inference framework for mobile, edge, and cloud scenarios. ☆4,588 · Updated last year
- Quantized Neural Network PACKage: a mobile-optimized implementation of quantized neural network operators ☆1,543 · Updated 5 years ago
- 💎 1MB lightweight face detection model ☆7,412 · Updated last year
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆9,644 · Updated this week
- FeatherCNN is a high-performance inference engine for convolutional neural networks. ☆1,218 · Updated 5 years ago
- High-performance inference and deployment toolkit for LLMs and VLMs based on PaddlePaddle ☆3,459 · Updated this week
- A high-performance, generic framework for distributed DNN training ☆3,695 · Updated last year
- Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators. ☆3,199 · Updated 3 weeks ago
- Google Brain AutoML ☆6,391 · Updated 5 months ago
- AIMET is a library that provides advanced quantization and compression techniques for trained neural network models. ☆2,427 · Updated this week
- oneAPI Deep Neural Network Library (oneDNN) ☆3,866 · Updated this week
- Visualizer for neural network, deep learning, and machine learning models ☆31,201 · Updated this week
- OpenVINO™ is an open source toolkit for optimizing and deploying AI inference ☆8,722 · Updated this week
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,136 · Updated 3 weeks ago