Tencent / TNN
TNN: a uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is distinguished by several outstanding features, including cross-platform capability, high performance, model compression, and code pruning. Based on ncnn and Rapidnet, TNN further strengthens the support and …
☆4,407 · Updated 2 weeks ago
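For orientation, here is a minimal sketch of the inference flow that TNN's own C++ examples follow: initialize a TNN network from a converted .tnnproto/.tnnmodel pair, create an Instance on a chosen backend, then feed an input Mat and read back the output. The file paths, input shape, and device choice are illustrative assumptions, and the API names are taken from the repository's public headers as I understand them, so treat this as a sketch rather than canonical usage.

```cpp
#include <fstream>
#include <memory>
#include <sstream>
#include <string>
#include <vector>

#include "tnn/core/tnn.h"       // TNN_NS::TNN, ModelConfig, NetworkConfig
#include "tnn/core/instance.h"  // TNN_NS::Instance
#include "tnn/core/mat.h"       // TNN_NS::Mat, MatConvertParam

// Hypothetical helper: slurp a serialized proto/model file into a string.
static std::string ReadFile(const std::string &path) {
    std::ifstream file(path, std::ios::binary);
    std::stringstream buffer;
    buffer << file.rdbuf();
    return buffer.str();
}

int main() {
    // 1. Describe the model: a TNN-format proto/model pair produced by the converter.
    TNN_NS::ModelConfig model_config;
    model_config.model_type = TNN_NS::MODEL_TYPE_TNN;
    model_config.params = {ReadFile("model.tnnproto"),   // placeholder path
                           ReadFile("model.tnnmodel")};  // placeholder path

    TNN_NS::TNN net;
    TNN_NS::Status status = net.Init(model_config);

    // 2. Create an executable Instance on a chosen backend (ARM CPU assumed here;
    //    DEVICE_X86, DEVICE_OPENCL, DEVICE_METAL, ... are other options).
    TNN_NS::NetworkConfig network_config;
    network_config.device_type = TNN_NS::DEVICE_ARM;
    std::shared_ptr<TNN_NS::Instance> instance = net.CreateInst(network_config, status);

    // 3. Wrap the input tensor (NCHW float, 1x3x224x224 assumed) in a Mat and run.
    std::vector<int> dims = {1, 3, 224, 224};
    std::vector<float> input(1 * 3 * 224 * 224, 0.0f);
    auto input_mat = std::make_shared<TNN_NS::Mat>(TNN_NS::DEVICE_NAIVE,
                                                   TNN_NS::NCHW_FLOAT,
                                                   dims, input.data());
    instance->SetInputMat(input_mat, TNN_NS::MatConvertParam());
    instance->Forward();

    // 4. Read back the (first/default) output Mat.
    std::shared_ptr<TNN_NS::Mat> output_mat;
    instance->GetOutputMat(output_mat);
    return 0;
}
```

The .tnnproto/.tnnmodel pair is produced from ONNX, Caffe, or TensorFlow models with the converter tooling shipped in the repository; error handling on the returned Status values is omitted above for brevity.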
Related projects
Alternatives and complementary repositories for TNN
- MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba ☆8,712 · Updated last week
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆20,436 · Updated this week
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. ☆4,932 · Updated 4 months ago
- 🛠 A lite C++ toolkit of awesome AI models, supporting ONNXRuntime, MNN, TNN, NCNN, and TensorRT. ☆3,644 · Updated last week
- OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient. ☆5,903 · Updated this week
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices. ☆4,643 · Updated last month
- The minimal OpenCV for Android, iOS, ARM Linux, Windows, Linux, macOS, HarmonyOS, WebAssembly, watchOS, tvOS, visionOS ☆2,532 · Updated last week
- PaddlePaddle's high-performance deep learning inference engine for mobile and edge devices ☆6,963 · Updated last month
- A primitive library for neural networks ☆1,291 · Updated this week
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆915 · Updated 3 months ago
- 😎 A Collection of Awesome NCNN-based Projects ☆711 · Updated last year
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. ☆2,786 · Updated last year
- 🍅 Deploy ncnn on mobile phones; supports Android and iOS. ☆1,476 · Updated 2 years ago
- FeatherCNN is a high performance inference engine for convolutional neural networks. ☆1,211 · Updated 5 years ago
- ⚡️ An easy-to-use and fast deep learning model deployment toolkit for ☁️ cloud, 📱 mobile, and 📹 edge, including image, video, text, and audio … ☆2,982 · Updated last month
- ONNX-TensorRT: TensorRT backend for ONNX ☆2,948 · Updated this week
- Simplify your ONNX model ☆3,849 · Updated 2 months ago
- An inference framework that also ships many useful demos; please star it if you find it useful. ☆2,220 · Updated 2 months ago
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆11,761 · Updated this week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components… ☆10,764 · Updated this week
- An ultra-lightweight, general-purpose object detection algorithm based on YOLO; the computation cost is only 250 MFLOPs, and the ncnn model size is… ☆2,006 · Updated 3 years ago
- PaddleSlim is an open-source library for deep model compression and architecture search. ☆1,562 · Updated this week
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,525 · Updated 5 years ago
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,198 · Updated last year
- NanoDet-Plus ⚡ Super fast and lightweight anchor-free object detection model. 🔥 Only 980 KB (int8) / 1.8 MB (fp16), and it runs at 97 FPS on a cellphone… ☆5,753 · Updated 3 months ago
- Deep Learning Based Free Mobile Real-Time Face Landmark Detector. Contact: jack-yu-business@foxmail.com ☆1,682 · Updated 3 weeks ago
- 💎 1MB lightweight face detection model ☆7,170 · Updated 10 months ago
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator ☆14,642 · Updated this week
- SeetaFace 2: open source, full stack face recognition toolkit. ☆2,138 · Updated 7 months ago
- A library for high performance deep learning inference on NVIDIA GPUs. ☆547 · Updated 2 years ago