apache / tvm
Open deep learning compiler stack for cpu, gpu and specialized accelerators
☆12,540 · Updated this week
Alternatives and similar repositories for tvm
Users interested in tvm are comparing it to the libraries listed below.
- oneAPI Deep Neural Network Library (oneDNN) ☆3,866 · Updated this week
- Compiler for Neural Network hardware accelerators ☆3,311 · Updated last year
- Open standard for machine learning interoperability ☆19,440 · Updated last week
- Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. ☆14,571 · Updated 3 weeks ago
- Development repository for the Triton language and compiler ☆16,568 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆9,644 · Updated last week
- A retargetable MLIR-based machine learning compiler and runtime toolkit. ☆3,301 · Updated this week
- High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆2,086 · Updated this week
- MMdnn is a set of tools to help users inter-operate among different deep learning frameworks. E.g. model conversion and visualization. Co… ☆5,809 · Updated 2 weeks ago
- Optimized primitives for collective multi-GPU communication ☆3,964 · Updated last week
- a language for fast, portable data-parallel computation ☆6,154 · Updated last week
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,426 · Updated this week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,039 · Updated this week
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆21,919 · Updated this week
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,484 · Updated last week
- Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Juli… ☆20,818 · Updated last year
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. ☆5,023 · Updated last year
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator ☆17,563 · Updated this week
- Transformer related optimization, including BERT, GPT ☆6,274 · Updated last year
- Tutorials for creating and using ONNX models ☆3,586 · Updated last year
- CUDA Templates for Linear Algebra Subroutines ☆8,249 · Updated last week
- Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distille… ☆4,401 · Updated 2 years ago
- ☆1,904 · Updated 2 years ago
- The Compute Library is a set of computer vision and machine learning functions optimised for both Arm CPUs and GPUs using SIMD technologi… ☆3,027 · Updated last week
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆33,198 · Updated this week
- OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient. ☆9,361 · Updated last week
- A high performance and generic framework for distributed DNN training ☆3,695 · Updated last year
- OpenVINO™ is an open source toolkit for optimizing and deploying AI inference ☆8,722 · Updated last week
- "Multi-Level Intermediate Representation" Compiler Infrastructure ☆1,752 · Updated 4 years ago
- A PyTorch Extension: Tools for easy mixed precision and distributed training in Pytorch ☆8,767 · Updated 2 weeks ago
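For orientation against the alternatives above, here is a minimal sketch of the kind of workflow tvm itself covers: importing an ONNX model through the Relay frontend, compiling it for a CPU target, and running one inference. It assumes a Relay-era TVM release and uses an illustrative "model.onnx" file, an input tensor named "input", and a 1×3×224×224 shape; none of these specifics come from the listing above.

```python
# Minimal sketch, assuming a Relay-era TVM build and a placeholder ONNX model.
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Load the ONNX graph and import it into Relay IR.
onnx_model = onnx.load("model.onnx")              # hypothetical model file
shape_dict = {"input": (1, 3, 224, 224)}          # assumed input name and shape
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for a plain CPU target with standard optimizations.
target = tvm.target.Target("llvm")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run one inference through the graph executor.
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
print(module.get_output(0).numpy().shape)
```

Changing the target string (for example to "cuda") retargets the same imported graph to other hardware, which is roughly the axis along which many of the compilers and runtimes listed above differ.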