alibaba / MNN
MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases at Alibaba. Full multimodal LLM Android app: [MNN-LLM-Android](./apps/Android/MnnLlmChat/README.md)
☆11,103 · Updated this week
Alternatives and similar repositories for MNN
Users interested in MNN are comparing it to the libraries listed below.
- TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop, and server. TNN is … ☆4,522 · Updated 3 weeks ago
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆21,541 · Updated this week
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. ☆5,011 · Updated 11 months ago
- PaddlePaddle high-performance deep learning inference engine for mobile and edge ☆7,093 · Updated last week
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. ☆2,884 · Updated 2 years ago
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆11,640 · Updated last week
- Open standard for machine learning interoperability ☆19,003 · Updated this week
- FeatherCNN is a high performance inference engine for convolutional neural networks. ☆1,220 · Updated 5 years ago
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator ☆16,742 · Updated this week
- Simplify your onnx model ☆4,088 · Updated 8 months ago
- Visualizer for neural network, deep learning and machine learning models ☆30,338 · Updated this week
- Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators. ☆3,162 · Updated 2 weeks ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆946 · Updated last month
- Open deep learning compiler stack for cpu, gpu and specialized accelerators ☆12,319 · Updated this week
- The minimal opencv for Android, iOS, ARM Linux, Windows, Linux, MacOS, HarmonyOS, WebAssembly, watchOS, tvOS, visionOS ☆2,866 · Updated 2 weeks ago
- MegEngine is a fast, scalable, easy-to-use deep learning framework with support for automatic differentiation ☆4,791 · Updated 7 months ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,541 · Updated 5 years ago
- Tutorials for creating and using ONNX models ☆3,529 · Updated 10 months ago
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆9,255 · Updated this week
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,079 · Updated 2 weeks ago
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… ☆2,974 · Updated 2 weeks ago
- Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX ☆2,426 · Updated 3 months ago
- oneAPI Deep Neural Network Library (oneDNN) ☆3,796 · Updated this week
- An easy to use PyTorch to TensorRT converter ☆4,741 · Updated 9 months ago
- Ultra-lightweight Chinese OCR supporting vertical text recognition, with inference via ncnn, MNN, and TNN (dbnet (1.8M) + crnn (2.5M) + anglenet (378KB)); total model size only 4.7M ☆12,120 · Updated last year
- OpenVINO™ is an open source toolkit for optimizing and deploying AI inference ☆8,348 · Updated this week
- MMdnn is a set of tools to help users inter-operate among different deep learning frameworks. E.g. model conversion and visualization. Co… ☆5,809 · Updated last year
- Transformer related optimization, including BERT, GPT ☆6,173 · Updated last year
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,030 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,762 · Updated this week