OpenPPL / ppl.nn
A primitive library for neural networks
☆1,344 · Updated 7 months ago
Alternatives and similar repositories for ppl.nn
Users interested in ppl.nn are comparing it to the libraries listed below.
- ppl.cv is a high-performance image processing library from OpenPPL that supports various platforms.☆507 · Updated 8 months ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads.☆877 · Updated 6 months ago
- TensorRT Plugin Autogen Tool☆369 · Updated 2 years ago
- MegCC is a deep learning model compiler with an ultra-lightweight runtime that is efficient and easy to port.☆486 · Updated 8 months ago
- Row-major matmul optimization☆647 · Updated last year
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description.☆990 · Updated 9 months ago
- A library for high-performance deep learning inference on NVIDIA GPUs.☆553 · Updated 3 years ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility.☆949 · Updated 3 months ago
- ☆1,031 · Updated last year
- PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool.☆1,705 · Updated last year
- Simple samples for TensorRT programming☆1,623 · Updated last month
- ☆611 · Updated last year
- Adlik: Toolkit for Accelerating Deep Learning Inference☆801 · Updated last year
- This is a series of GPU optimization topics. Here we will introduce how to optimize the CUDA kernel in detail. I will introduce several…☆1,093 · Updated last year
- ☆1,897 · Updated last year
- Deploy your model with TensorRT quickly.☆768 · Updated last year
- Machine learning compiler based on MLIR for Sophgo TPU.☆757 · Updated last week
- A collection of compiler learning resources.☆2,445 · Updated 3 months ago
- Model Quantization Benchmark☆820 · Updated 2 months ago
- How to learn PyTorch and OneFlow☆441 · Updated last year
- ☆291 · Updated 3 years ago
- How to optimize some algorithms in CUDA.☆2,317 · Updated this week
- Several simple examples for popular neural network toolkits calling custom CUDA operators.☆1,484 · Updated 4 years ago
- Easy-to-use, high-performance, multi-platform inference deployment framework☆1,061 · Updated this week
- Dive into Deep Learning Compiler☆646 · Updated 3 years ago
- TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.☆836 · Updated last month
- A CPU tool for benchmarking peak floating-point performance☆556 · Updated last week
- A parser, editor and profiler tool for ONNX models.☆445 · Updated last month
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052☆475 · Updated last year
- A model compilation solution for various hardware☆438 · Updated 3 weeks ago