huawei-noah / bolt
Bolt is a deep learning library with high performance and heterogeneous flexibility.
☆952 · Updated 6 months ago
Alternatives and similar repositories for bolt
Users who are interested in bolt are comparing it to the libraries listed below
- A library for high-performance deep learning inference on NVIDIA GPUs.☆557 · Updated 3 years ago
- A primitive library for neural networks☆1,363 · Updated 10 months ago
- High-performance cross-platform inference engine; Anakin runs on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices.☆534 · Updated 3 years ago
- Benchmark for embedded-AI deep learning inference engines such as NCNN, TNN, MNN, and TensorFlow Lite.☆204 · Updated 4 years ago
- EasyQuant (EQ) is an efficient and simple post-training quantization method that works by effectively optimizing the scales of weights and activations.☆405 · Updated 2 years ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description.☆994 · Updated last year
- ppl.cv is a high-performance image processing library of openPPL supporting various platforms.☆511 · Updated 11 months ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators☆1,546 · Updated 6 years ago
- Model Quantization Benchmark☆842 · Updated 6 months ago
- TensorRT Plugin Autogen Tool☆368 · Updated 2 years ago
- ☆1,037 · Updated last year
- Benchmarking Neural Network Inference on Mobile Devices☆383 · Updated 2 years ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads.☆899 · Updated 9 months ago
- Dive into Deep Learning Compiler☆646 · Updated 3 years ago
- Everything in Torch Fx☆344 · Updated last year
- heterogeneity-aware-lowering-and-optimization☆256 · Updated last year
- MegCC is a deep learning model compiler featuring an ultra-lightweight runtime, high efficiency, and easy portability.☆486 · Updated 11 months ago
- TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.☆851 · Updated 2 months ago
- TVM integration into PyTorch☆454 · Updated 5 years ago
- Deploy your model with TensorRT quickly.☆765 · Updated last year
- row-major matmul optimization☆682 · Updated 2 months ago
- A parser, editor and profiler tool for ONNX models.☆458 · Updated 2 months ago
- Adlik: Toolkit for Accelerating Deep Learning Inference☆806 · Updated last year
- Generate a quantization parameter file for ncnn framework int8 inference☆517 · Updated 5 years ago
- ☆669 · Updated 4 years ago
- FeatherCNN is a high performance inference engine for convolutional neural networks.☆1,221 · Updated 6 years ago
- MNN demo applications (JNI execution, RK3399); supports TFLite, TensorFlow, Caffe, and ONNX models.☆509 · Updated 6 years ago
- micronet, a model compression and deployment library. Compression: 1. quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa/Quantiz…☆2,260 · Updated 5 months ago
- Server-side deep learning deployment examples☆454 · Updated 5 years ago
- A simple network quantization demo using PyTorch from scratch.☆538 · Updated 2 years ago
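The last entry above describes a from-scratch quantization demo; as a point of reference for the quantization-focused entries in this list, here is a minimal sketch of symmetric per-tensor int8 fake quantization of the kind such demos typically implement. The function names and the max-abs calibration choice are illustrative assumptions, not code taken from any of the listed repositories.

```python
# Minimal sketch: symmetric per-tensor int8 fake quantization (illustrative only,
# not code from any repository listed above).
import torch

def compute_scale(t: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    # Max-abs calibration: map the largest magnitude onto the int8 limit (127).
    qmax = 2 ** (num_bits - 1) - 1
    return t.abs().max().clamp(min=1e-8) / qmax

def fake_quantize(t: torch.Tensor, scale: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    # Quantize to integers, clamp to the representable range, then dequantize,
    # so the returned float tensor carries the quantization error.
    qmin = -(2 ** (num_bits - 1))
    qmax = 2 ** (num_bits - 1) - 1
    q = torch.clamp(torch.round(t / scale), qmin, qmax)
    return q * scale

if __name__ == "__main__":
    w = torch.randn(64, 64)
    s = compute_scale(w)
    w_q = fake_quantize(w, s)
    print("max abs quantization error:", (w - w_q).abs().max().item())
```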