OAID / AutoKernel
AutoKernel is a simple, easy-to-use, low-barrier automatic operator optimization tool that improves the deployment efficiency of deep learning algorithms.
☆739 · Updated 2 years ago
Alternatives and similar repositories for AutoKernel
Users interested in AutoKernel are comparing it to the libraries listed below.
- ☆253 · Updated 2 years ago
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices ☆4,490 · Updated 6 months ago
- 🔥 A mini PyTorch inference framework inspired by darknet (supports yolov3, yolov4, yolov5, unet, ...) ☆748 · Updated 2 years ago
- TVM Documentation in Simplified Chinese / TVM 中文文档 ☆2,294 · Updated 5 months ago
- Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite etc. ☆204 · Updated 4 years ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆954 · Updated 5 months ago
- Tengine Convert Tool supports converting multiple frameworks' models into the tmfile format used by the Tengine-Lite AI framework. ☆93 · Updated 4 years ago
- ppl.cv is a high-performance image processing library of openPPL supporting various platforms. ☆510 · Updated 10 months ago
- SuperSonic, a new open-source framework to allow compiler developers to integrate RL into compilers easily, regardless of their RL expert… ☆121 · Updated 2 years ago
- Compiler Infrastructure for Neural Networks ☆147 · Updated 2 years ago
- EasyQuant(EQ) is an efficient and simple post-training quantization method via effectively optimizing the scales of weights and activatio… ☆403 · Updated 2 years ago
- MegCC is a deep learning model compiler with an ultra-lightweight runtime, high efficiency, and easy portability. ☆487 · Updated 10 months ago
- TensorLayerX: A Unified Deep Learning and Reinforcement Learning Framework for All Hardware, Backends and OSes. ☆528 · Updated last month
- A primitive library for neural networks ☆1,355 · Updated 9 months ago
- Adlik: Toolkit for Accelerating Deep Learning Inference ☆806 · Updated last year
- Pruning Filter in Filter (NeurIPS 2020) ☆148 · Updated last year
- TensorRT Plugin Autogen Tool ☆367 · Updated 2 years ago
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations ☆232 · Updated 11 months ago
- ☆98 · Updated 4 years ago
- Convert Caffe models to ONNX models ☆176 · Updated 2 years ago
- A library for high-performance deep learning inference on NVIDIA GPUs. ☆558 · Updated 3 years ago
- SQuant [ICLR22] ☆130 · Updated 2 years ago
- High-performance cross-platform inference engine; you can run Anakin on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain and Cambricon devices. ☆534 · Updated 2 years ago
- row-major matmul optimization ☆664 · Updated 3 weeks ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆994 · Updated 11 months ago
- TVM tutorial ☆66 · Updated 6 years ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆893 · Updated 8 months ago
- A parser, editor and profiler tool for ONNX models. ☆456 · Updated last month
- Dive into Deep Learning Compiler ☆647 · Updated 3 years ago
- This is an implementation of sgemm_kernel tuned for the L1d cache. ☆229 · Updated last year