XiaoMi / mace
MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.
☆ 4,999 · Updated 9 months ago
Alternatives and similar repositories for mace:
Users interested in mace are comparing it to the libraries listed below.
- ncnn is a high-performance neural network inference framework optimized for the mobile platform. ☆ 21,150 · Updated this week
- MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba. Full multimodal LLM … ☆ 10,050 · Updated this week
- TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop, and server. TNN is … ☆ 4,479 · Updated this week
- FeatherCNN is a high-performance inference engine for convolutional neural networks. ☆ 1,217 · Updated 5 years ago
- Quantized Neural Network PACKage: a mobile-optimized implementation of quantized neural network operators. ☆ 1,538 · Updated 5 years ago
- Caffe2 is a lightweight, modular, and scalable deep learning framework. ☆ 8,419 · Updated 2 years ago
- PaddlePaddle's high-performance deep learning inference engine for mobile and edge devices. ☆ 7,051 · Updated 2 months ago
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. ☆ 2,816 · Updated last year
- oneAPI Deep Neural Network Library (oneDNN). ☆ 3,747 · Updated this week
- MMdnn is a set of tools to help users inter-operate among different deep learning frameworks, e.g. model conversion and visualization. Co… ☆ 5,810 · Updated 9 months ago
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators. ☆ 12,134 · Updated this week
- A high-performance and generic framework for distributed DNN training. ☆ 3,668 · Updated last year
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices. ☆ 4,450 · Updated 2 weeks ago
- Largest multi-label image database; ResNet-101 model; 80.73% top-1 accuracy on ImageNet. ☆ 3,063 · Updated 2 years ago
- High-performance, cross-platform inference engine; you can run Anakin on x86 CPU, Arm, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices. ☆ 533 · Updated 2 years ago
- Mobile AI Compute Engine (MACE) Model Zoo. ☆ 373 · Updated 3 years ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆ 940 · Updated 7 months ago
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… ☆ 2,937 · Updated 2 weeks ago
- Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Juli… ☆ 20,790 · Updated last year
- Deep Learning GPU Training System. ☆ 4,175 · Updated 2 months ago
- Low-precision matrix multiplication. ☆ 1,794 · Updated last year
- Open standard for machine learning interoperability. ☆ 18,658 · Updated this week
- An open autonomous driving platform. ☆ 25,597 · Updated 2 months ago
- Compiler for Neural Network hardware accelerators. ☆ 3,271 · Updated 10 months ago
- Caffe: a fast open framework for deep learning. ☆ 34,257 · Updated 7 months ago
- Header-only, dependency-free deep learning framework in C++14. ☆ 5,894 · Updated 2 years ago
- A Flexible and Powerful Parameter Server for large-scale machine learning. ☆ 6,751 · Updated last year
- nGraph has moved to OpenVINO. ☆ 1,350 · Updated 4 years ago
- Build and run Docker containers leveraging NVIDIA GPUs. ☆ 17,336 · Updated last year