A high-performance, cross-platform inference engine. Anakin runs on x86 CPUs, Arm, NVIDIA GPUs, AMD GPUs, Bitmain, and Cambricon devices.
☆537 · Updated Sep 23, 2022
Alternatives and similar repositories for Anakin
Users that are interested in Anakin are comparing it to the libraries listed below.
- FeatherCNN is a high-performance inference engine for convolutional neural networks. ☆1,227 · Updated Sep 24, 2019
- Generates a quantization parameter file for int8 inference with the ncnn framework. ☆517 · Updated Jul 29, 2020
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. ☆5,037 · Updated Jun 17, 2024
- A minimal runtime core of Caffe: forward-only, with GPU support and memory efficiency. ☆375 · Updated Jul 15, 2020
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologies. ☆3,141 · Updated this week
- dabnn is an accelerated binary neural network inference framework for mobile platforms. ☆777 · Updated Nov 12, 2019
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices. ☆4,521 · Updated Mar 6, 2025
- Benchmarks for ncnn, a high-performance neural network inference framework optimized for the mobile platform. ☆72 · Updated Mar 8, 2019
- MNN: a blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI. ☆15,068 · Updated this week
- Quantized Neural Network PACKage: a mobile-optimized implementation of quantized neural network operators. ☆1,548 · Updated Aug 28, 2019
- A very fast neural network computing framework optimized for mobile platforms. QQ group: 676883532 (join message: 绝影). ☆268 · Updated Jan 4, 2018
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆958 · Updated Apr 11, 2025
- Open Machine Learning Compiler Framework. ☆13,304 · Updated this week
- PaddlePaddle's high-performance deep learning inference engine for mobile and edge devices. ☆7,248 · Updated this week
- A face recognition model for mobile devices with the same computational cost as MobileFaceNet, but 2%+ higher accuracy on MegaFace. ☆232 · Updated Apr 17, 2020
- Low-precision matrix multiplication. ☆1,841 · Updated Jan 29, 2024
- Cross-compiles libmace.a via Makefile, and can use the GPU on embedded devices to run deep learning models. ☆96 · Updated Aug 23, 2018
- ☆16 · Updated Aug 30, 2024
- ☆14 · Updated Jan 14, 2020
- ncnn is a high-performance neural network inference framework optimized for the mobile platform. ☆23,151 · Updated Apr 22, 2026
- Macro Continuous Evaluation Platform for Paddle. ☆19 · Updated Mar 11, 2020
- TNN: a uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. ☆4,631 · Updated May 9, 2025
- nGraph has moved to OpenVINO. ☆1,343 · Updated Oct 15, 2020
- Deep Face Model Compression. ☆195 · Updated Aug 21, 2018
- oneAPI Deep Neural Network Library (oneDNN). ☆3,984 · Updated this week
- Acceleration package for neural networks on multi-core CPUs. ☆1,704 · Updated Jun 11, 2024
- ShuffleNet-V2 for both PyTorch and Caffe. ☆505 · Updated Aug 9, 2018
- Heterogeneous runtime version of Caffe, adding heterogeneous computing capabilities to Caffe on top of a heterogeneous computing infrastructure framework. ☆269 · Updated Oct 16, 2018
- MMdnn is a set of tools to help users inter-operate among different deep learning frameworks, e.g. model conversion and visualization. ☆5,811 · Updated Aug 7, 2025
- A depthwise convolution layer implementation for Caffe by liuhao (GPU only). ☆525 · Updated May 21, 2021
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. ☆2,910 · Updated Mar 31, 2023
- Daquexian's NNAPI library: ONNX + Android NNAPI. ☆350 · Updated Feb 20, 2020
- Reproduction of MobileNetV2 using MXNet. ☆128 · Updated Mar 15, 2019
- ☆2,012 · Updated Jul 29, 2023
- A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices. ☆151 · Updated Jun 13, 2022
- Explore the Capabilities of the TensorRT Platform. ☆262 · Updated Aug 23, 2021
- [CVPR 2018] Real-Time Rotation-Invariant Face Detection with Progressive Calibration Networks. ☆1,086 · Updated May 11, 2023
- Benchmarks for embedded-AI deep learning inference engines such as NCNN, TNN, MNN, and TensorFlow Lite. ☆201 · Updated Feb 18, 2021
- Caffe Implementation of Google's MobileNets (v1 and v2). ☆1,274 · Updated Jun 8, 2021