PaddlePaddle / Anakin
A high-performance, cross-platform inference engine; Anakin runs on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices.
☆532 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for Anakin
- heterogeneity-aware-lowering-and-optimization ☆253 · Updated 9 months ago
- TVM integration into PyTorch ☆453 · Updated 4 years ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆917 · Updated 3 months ago
- Generates a quantization parameter file for int8 inference with the ncnn framework ☆521 · Updated 4 years ago
- FeatherCNN is a high-performance inference engine for convolutional neural networks. ☆1,211 · Updated 5 years ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆815 · Updated 2 months ago
- A benchmark for embedded-AI deep learning inference engines such as NCNN, TNN, MNN, and TensorFlow Lite. ☆202 · Updated 3 years ago
- Place for meetup slides ☆140 · Updated 4 years ago
- Makes it easy to benchmark AI accelerators ☆179 · Updated last year
- A library for high-performance deep learning inference on NVIDIA GPUs. ☆547 · Updated 2 years ago
- Benchmarking Neural Network Inference on Mobile Devices ☆359 · Updated last year
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆960 · Updated last month
- Dive into Deep Learning Compiler ☆642 · Updated 2 years ago
- Deep Learning Framework Performance Profiling Toolkit ☆278 · Updated 2 years ago
- Quantized Neural Network PACKage: a mobile-optimized implementation of quantized neural network operators ☆1,527 · Updated 5 years ago
- Mobile AI Compute Engine Model Zoo ☆371 · Updated 3 years ago
- ☆127 · Updated 6 years ago
- ☆567 · Updated 6 years ago
- Notes on reading the TensorFlow source code ☆189 · Updated 6 years ago
- Symbolic Expression and Statement Module for new DSLs ☆204 · Updated 4 years ago
- Winograd minimal convolution algorithm generator for convolutional neural networks (see the F(2,3) sketch after this list). ☆601 · Updated 4 years ago
- ppl.cv is a high-performance image-processing library from openPPL that supports various platforms. ☆493 · Updated last week
- Minimal runtime core of Caffe: forward-only, with GPU support and memory efficiency. ☆374 · Updated 4 years ago
- EasyQuant (EQ) is an efficient and simple post-training quantization method that works by optimizing the scales of weights and activations (a generic scale-calibration sketch follows this list). ☆392 · Updated last year
- ☆122 · Updated 3 years ago
- A very fast neural-network computing framework optimized for mobile platforms. QQ group: 676883532 (join verification message: 绝影) ☆269 · Updated 6 years ago
- A primitive library for neural networks ☆1,291 · Updated this week
- Explore the Capabilities of the TensorRT Platform ☆260 · Updated 3 years ago
- A quick view of high-performance convolutional neural network (CNN) inference engines on mobile devices. ☆150 · Updated 2 years ago
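
The int8 entries above (the ncnn calibration tool and EasyQuant) both revolve around choosing per-tensor scales. As a generic, hypothetical illustration of that idea — not code from either project — the sketch below computes a max-abs int8 scale and measures how well the dequantized tensor matches the original; helper names such as `calibrate_scale` are made up for this example.

```python
import numpy as np

def calibrate_scale(x: np.ndarray) -> float:
    """Max-abs calibration: map the largest magnitude to the int8 limit 127."""
    return float(np.max(np.abs(x))) / 127.0

def quantize_int8(x: np.ndarray, scale: float) -> np.ndarray:
    """Quantize to int8 with rounding and saturation."""
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    activations = rng.normal(0.0, 0.5, size=(1, 64, 28, 28)).astype(np.float32)
    scale = calibrate_scale(activations)
    recon = dequantize(quantize_int8(activations, scale), scale)
    # Cosine similarity between the float and dequantized tensors -- the kind of
    # fidelity metric scale-search methods such as EasyQuant reportedly optimize.
    cos = float(np.dot(activations.ravel(), recon.ravel()) /
                (np.linalg.norm(activations) * np.linalg.norm(recon)))
    print(f"scale={scale:.6f}  cosine_similarity={cos:.6f}")
```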
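
The Winograd generator entry refers to minimal filtering algorithms. As a small standalone illustration of the underlying transform (the standard F(2,3) form from Lavin and Gray, not output of that generator), the sketch below computes two outputs of a 1-D convolution with 4 multiplications instead of 6 and checks the result against direct dot products.

```python
import numpy as np

# Standard F(2,3) Winograd minimal-filtering matrices.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=np.float64)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]], dtype=np.float64)
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float64)

def winograd_f23(d: np.ndarray, g: np.ndarray) -> np.ndarray:
    """Two outputs of a valid 1-D correlation of a 4-element input tile `d`
    with a 3-tap filter `g`, using a 4-element element-wise product."""
    return AT @ ((G @ g) * (BT @ d))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d = rng.standard_normal(4)
    g = rng.standard_normal(3)
    direct = np.array([d[0:3] @ g, d[1:4] @ g])  # sliding dot products
    assert np.allclose(winograd_f23(d, g), direct)
    print("F(2,3) output:", winograd_f23(d, g))
```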