PaddlePaddle / Anakin
A high-performance, cross-platform inference engine: you can run Anakin on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices.
☆533 · Updated 2 years ago
Alternatives and similar repositories for Anakin
Users interested in Anakin are comparing it to the libraries listed below.
- FeatherCNN is a high-performance inference engine for convolutional neural networks. ☆1,218 · Updated 5 years ago
- To make it easy to benchmark AI accelerators ☆186 · Updated 2 years ago
- TVM integration into PyTorch ☆453 · Updated 5 years ago
- Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite, etc. ☆204 · Updated 4 years ago
- ☆125 · Updated 7 years ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆954 · Updated 4 months ago
- Benchmarking Neural Network Inference on Mobile Devices ☆380 · Updated 2 years ago
- heterogeneity-aware-lowering-and-optimization ☆256 · Updated last year
- Place for meetup slides ☆141 · Updated 4 years ago
- Quantized Neural Network PACKage: a mobile-optimized implementation of quantized neural network operators ☆1,543 · Updated 5 years ago
- Dive into Deep Learning Compiler ☆647 · Updated 3 years ago
- A library for high-performance deep learning inference on NVIDIA GPUs. ☆558 · Updated 3 years ago
- Adlik: Toolkit for Accelerating Deep Learning Inference ☆805 · Updated last year
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆886 · Updated 7 months ago
- ☆588 · Updated 7 years ago
- Optimizing Mobile Deep Learning on ARM GPU with TVM ☆181 · Updated 6 years ago
- Mobile AI Compute Engine Model Zoo ☆376 · Updated 4 years ago
- Generate a quantization parameter file for ncnn framework int8 inference ☆518 · Updated 5 years ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆993 · Updated 11 months ago
- Winograd minimal convolution algorithm generator for convolutional neural networks. ☆619 · Updated 4 years ago
- ppl.cv is a high-performance image processing library of openPPL supporting various platforms. ☆509 · Updated 9 months ago
- Explore the Capabilities of the TensorRT Platform ☆264 · Updated 3 years ago
- Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib ☆58 · Updated 2 years ago
- A very fast neural network computing framework optimized for mobile platforms. QQ group: 676883532 (verification message: 绝影) ☆268 · Updated 7 years ago
- ☆209 · Updated 7 years ago
- Notes on reading the TensorFlow source code ☆193 · Updated 6 years ago
- ☆127 · Updated 4 years ago
- A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices. ☆150 · Updated 3 years ago
- Heterogeneous Run Time version of Caffe. Adds heterogeneous capabilities to Caffe; uses heterogeneous computing infrastructure frame… ☆269 · Updated 6 years ago
- Embedded and Mobile Deployment ☆71 · Updated 7 years ago