apache / tvm
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
☆12,492 · Updated this week
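Before the comparison list, a minimal sketch of what using tvm looks like, based on the vector-add example from TVM's own tutorials. It uses the classic tensor-expression API (te.create_schedule / tvm.build); newer releases are migrating toward the Relax/TensorIR APIs, so treat the exact calls as version-dependent.

```python
# Minimal sketch of TVM's classic tensor-expression (TE) workflow,
# following the vector-add tutorial. Assumes `pip install apache-tvm`;
# the TE schedule API shown here is deprecated in newer releases in
# favor of Relax/TensorIR, so the exact calls are version-dependent.
import numpy as np
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A", dtype="float32")
B = te.placeholder((n,), name="B", dtype="float32")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

s = te.create_schedule(C.op)                # default schedule
f = tvm.build(s, [A, B, C], target="llvm")  # compile for the local CPU

dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
b = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
c = tvm.nd.empty((n,), "float32", dev)
f(a, b, c)
np.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy(), rtol=1e-5)
```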
Alternatives and similar repositories for tvm
Users interested in tvm often compare it to the libraries listed below.
- Compiler for Neural Network hardware accelerators · ☆3,310 · Updated last year
- Open standard for machine learning interoperability · ☆19,345 · Updated this week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… · ☆11,912 · Updated last week
- oneAPI Deep Neural Network Library (oneDNN) · ☆3,856 · Updated this week
- Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. · ☆14,558 · Updated this week
- Development repository for the Triton language and compiler (see the kernel sketch after this list) · ☆16,320 · Updated this week
- Transformer-related optimization, including BERT and GPT · ☆6,261 · Updated last year
- ncnn is a high-performance neural network inference framework optimized for mobile platforms · ☆21,843 · Updated this week
- ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator (see the inference sketch after this list) · ☆17,369 · Updated this week
- Optimized primitives for collective multi-GPU communication · ☆3,889 · Updated last week
- A retargetable MLIR-based machine learning compiler and runtime toolkit · ☆3,241 · Updated last week
- Simplify your ONNX model · ☆4,121 · Updated 10 months ago
- A machine learning compiler for GPUs, CPUs, and ML accelerators · ☆3,383 · Updated this week
- A language for fast, portable data-parallel computation · ☆6,139 · Updated last week
- OpenVINO™ is an open source toolkit for optimizing and deploying AI inference · ☆8,647 · Updated this week
- A collection of compiler learning resources · ☆2,457 · Updated 4 months ago
- Tutorials for creating and using ONNX models · ☆3,577 · Updated last year
- Low-precision matrix multiplication · ☆1,812 · Updated last year
- MMdnn is a set of tools to help users inter-operate among different deep learning frameworks. E.g. model conversion and visualization. Co… · ☆5,810 · Updated 2 weeks ago
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. · ☆5,017 · Updated last year
- High-efficiency floating-point neural network inference operators for mobile, server, and Web · ☆2,072 · Updated last week
- A collection of pre-trained, state-of-the-art models in the ONNX format · ☆8,857 · Updated last month
- Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distille… · ☆4,400 · Updated 2 years ago
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. · ☆9,531 · Updated last week
- CUDA Templates for Linear Algebra Subroutines · ☆8,149 · Updated this week
- An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications. · ☆2,910 · Updated 2 years ago
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… · ☆5,475 · Updated this week
- "Multi-Level Intermediate Representation" Compiler Infrastructure · ☆1,752 · Updated 4 years ago
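As promised at the Triton entry above, a minimal sketch of a Triton kernel, following the vector-add example from Triton's introductory tutorial. It assumes a CUDA-capable GPU with the triton and torch packages installed.

```python
# Minimal Triton vector-add kernel, per Triton's introductory tutorial.
# Assumes a CUDA-capable GPU and `pip install triton torch`.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                     # one program per block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                     # guard the ragged tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)              # enough blocks to cover n
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
assert torch.allclose(out, x + y)
```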
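And the inference sketch referenced at the ONNX Runtime entry: load an exported model and run it once on the CPU. The path "model.onnx" is a placeholder, and substituting 1 for symbolic batch dimensions is a simplifying assumption.

```python
# Minimal ONNX Runtime inference sketch. Assumes `pip install onnxruntime`;
# "model.onnx" is a placeholder for any exported model with float32 inputs.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
# Symbolic (dynamic) dimensions come back as strings; assume batch size 1.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
x = np.random.rand(*shape).astype(np.float32)
outputs = sess.run(None, {inp.name: x})   # None = fetch all model outputs
print([o.shape for o in outputs])
```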