PaddlePaddle / Anakin
High-performance cross-platform inference engine; you can run Anakin on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices.
☆532 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for Anakin
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆816 · Updated this week
- TVM integration into PyTorch ☆452 · Updated 4 years ago
- FeatherCNN is a high performance inference engine for convolutional neural networks. ☆1,210 · Updated 5 years ago
- heterogeneity-aware-lowering-and-optimization ☆253 · Updated 10 months ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆962 · Updated 2 months ago
- Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite, etc. ☆203 · Updated 3 years ago
- Place for meetup slides ☆140 · Updated 4 years ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators ☆1,528 · Updated 5 years ago
- To make it easy to benchmark AI accelerators ☆179 · Updated last year
- Dive into Deep Learning Compiler ☆643 · Updated 2 years ago
- Benchmarking Neural Network Inference on Mobile Devices ☆360 · Updated last year
- Generate a quantization parameter file for ncnn framework int8 inference ☆521 · Updated 4 years ago
- Mobile AI Compute Engine Model Zoo ☆371 · Updated 3 years ago
- ☆209 · Updated 6 years ago
- ☆123 · Updated 3 years ago
- ☆568 · Updated 6 years ago
- ☆127 · Updated 6 years ago
- DeepLearning Framework Performance Profiling Toolkit ☆277 · Updated 2 years ago
- The Tensor Algebra SuperOptimizer for Deep Learning ☆692 · Updated last year
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆918 · Updated 3 months ago
- Minimal runtime core of Caffe: forward only, with GPU support and memory efficiency. ☆374 · Updated 4 years ago
- A library for high performance deep learning inference on NVIDIA GPUs. ☆547 · Updated 2 years ago
- A quick view of high-performance convolution neural network (CNN) inference engines on mobile devices. ☆150 · Updated 2 years ago
- Low-precision matrix multiplication ☆1,780 · Updated 9 months ago
- WeChat: NeuralTalk, weekly report and awesome list of embedded AI. ☆375 · Updated 2 years ago
- Notes on reading the TensorFlow source code ☆189 · Updated 6 years ago
- Winograd minimal convolution algorithm generator for convolutional neural networks. ☆605 · Updated 4 years ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,210 · Updated this week
- Embedded and Mobile Deployment ☆71 · Updated 6 years ago
- A performant and modular runtime for TensorFlow ☆756 · Updated last month