XiaoMi / mace
MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.
☆4,935 · Updated 5 months ago
Related projects
Alternatives and complementary repositories for mace
- MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba. ☆8,749 · Updated this week
- TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop and server. TNN is … ☆4,416 · Updated last month
- FeatherCNN is a high performance inference engine for convolutional neural networks. ☆1,210 · Updated 5 years ago
- PaddlePaddle high-performance deep learning inference engine for mobile and edge devices. ☆6,969 · Updated this week
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. ☆2,788 · Updated last year
- ncnn is a high-performance neural network inference framework optimized for the mobile platform. ☆20,503 · Updated this week
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators. ☆1,528 · Updated 5 years ago
- MMdnn is a set of tools to help users interoperate among different deep learning frameworks, e.g. model conversion and visualization. Co… ☆5,796 · Updated 5 months ago
- A high-performance and generic framework for distributed DNN training. ☆3,630 · Updated last year
- An open-source ebook about the TensorFlow kernel and its implementation mechanism. ☆2,897 · Updated last year
- Tutorial code on how to build your own Deep Learning System in 2k Lines. ☆2,002 · Updated 6 years ago
- Mobile AI Compute Engine Model Zoo. ☆371 · Updated 3 years ago
- Open deep learning compiler stack for CPU, GPU and specialized accelerators. ☆11,798 · Updated this week
- A Flexible and Powerful Parameter Server for large-scale machine learning. ☆6,743 · Updated 10 months ago
- PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the PaddlePaddle core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning & machine learning). ☆22,268 · Updated this week
- A high-performance, cross-platform inference engine; Anakin runs on x86 CPUs, Arm, NVIDIA GPUs, AMD GPUs, Bitmain and Cambricon devices. ☆532 · Updated 2 years ago
- Benchmarking Neural Network Inference on Mobile Devices. ☆360 · Updated last year
- MindSpore is a new open-source deep learning training/inference framework that can be used for mobile, edge and cloud scenarios. ☆4,310 · Updated 3 months ago
- Largest multi-label image database; ResNet-101 model; 80.73% top-1 accuracy on ImageNet. ☆3,054 · Updated 2 years ago
- Converters for deep learning models between different deep learning frameworks. ☆3,243 · Updated last year
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆918 · Updated 3 months ago
- ☆4,612 · Updated 4 years ago
- Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators. ☆3,096 · Updated last week
- Deep Learning 101 with PaddlePaddle (an introductory tutorial for the PaddlePaddle deep learning framework). ☆2,740 · Updated 3 years ago
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… ☆2,861 · Updated this week
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices. ☆4,653 · Updated 2 months ago
- Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distille… ☆4,351 · Updated last year
- ☆1,655 · Updated 6 years ago
- An open source library for face detection in images. The face detection speed can reach 1000 FPS. ☆12,321 · Updated last month
- SeetaFace 2: an open-source, full-stack face recognition toolkit. ☆2,138 · Updated 8 months ago