MegEngine / MegPeak
☆249 · Updated last year
Alternatives and similar repositories for MegPeak
Users interested in MegPeak are comparing it to the libraries listed below.
- MegCC is a deep learning model compiler with an ultra-lightweight runtime that is efficient and easy to port. ☆485 · Updated 7 months ago
- mperf is an operator performance tuning toolbox for mobile/embedded platforms. ☆187 · Updated last year
- Efficient operator implementations based on the Cambricon Machine Learning Unit (MLU). ☆120 · Updated this week
- ☆96 · Updated 3 years ago
- An sgemm_kernel implementation tuned for the L1d cache. ☆227 · Updated last year
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆83 · Updated 2 years ago
- arm-neon ☆90 · Updated 10 months ago
- Symmetric int8 GEMM. ☆66 · Updated 4 years ago
- Fork of https://source.codeaurora.org/quic/hexagon_nn/nnlib ☆57 · Updated 2 years ago
- Examples for the TVM schedule API. ☆102 · Updated last year
- A CPU tool for benchmarking peak floating-point performance. ☆544 · Updated 3 weeks ago
- Code reading for TVM. ☆76 · Updated 3 years ago
- A Chinese translation of the CUDA PTX ISA documentation. ☆41 · Updated this week
- AutoKernel is an easy-to-use, low-barrier automatic operator optimization tool for improving the efficiency of deploying deep learning algorithms. ☆739 · Updated 2 years ago
- ☆148 · Updated 4 months ago
- Tengine Convert Tool supports converting models from multiple frameworks into the tmfile format used by the Tengine-Lite AI framework. ☆93 · Updated 3 years ago
- Common libraries for PPL projects. ☆29 · Updated 2 months ago
- Compiler Infrastructure for Neural Networks. ☆145 · Updated last year
- ppl.cv is a high-performance image processing library of openPPL supporting various platforms. ☆503 · Updated 7 months ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆239 · Updated 2 weeks ago
- Yinghan's Code Sample. ☆329 · Updated 2 years ago
- A converter from MegEngine to other frameworks. ☆69 · Updated 2 years ago
- ☆36 · Updated 7 months ago
- heterogeneity-aware-lowering-and-optimization ☆254 · Updated last year
- ☆142 · Updated 5 months ago
- Edge Machine Learning Library. ☆194 · Updated 2 years ago
- How to design a CPU GEMM on x86 with 256-bit AVX that can beat OpenBLAS. ☆70 · Updated 6 years ago
- Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite, etc. ☆204 · Updated 4 years ago
- NART (NART Is Not A RunTime) is a deep learning inference framework. ☆37 · Updated 2 years ago
- A model compilation solution for various hardware. ☆437 · Updated 3 weeks ago