apache / tvm
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
☆12,024 · Updated this week
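To ground what "deep learning compiler stack" means in practice, here is a minimal sketch of defining, scheduling, and building a vector-add kernel with TVM's tensor-expression (te) API. This is illustrative only: the te/schedule flow shown comes from older TVM releases (newer releases favor TensorIR schedules), and the `llvm` CPU target and the `vector_add` name are arbitrary choices.

```python
# Minimal TVM tensor-expression sketch: declare a computation, build it for
# CPU, and check the result. Assumes an older TVM release with the te/schedule API.
import numpy as np
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")            # default dtype is float32
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

s = te.create_schedule(C.op)                  # default, unoptimized schedule
fadd = tvm.build(s, [A, B, C], target="llvm", name="vector_add")

dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
b = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
c = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
fadd(a, b, c)
np.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy(), rtol=1e-5)
```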
Alternatives and similar repositories for tvm:
Users interested in tvm are comparing it to the libraries listed below.
- ONNX: Open standard for machine learning interoperability (an ONNX-to-TVM import sketch follows this list) ☆18,448 · Updated this week
- Glow: Compiler for Neural Network hardware accelerators ☆3,273 · Updated 9 months ago
- oneAPI Deep Neural Network Library (oneDNN) ☆3,726 · Updated this week
- Horovod: Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. ☆14,385 · Updated 2 weeks ago
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆11,184 · Updated 2 weeks ago
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆20,939 · Updated last week
- Apache MXNet: Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Juli… ☆20,788 · Updated last year
- Halide: a language for fast, portable data-parallel computation ☆5,970 · Updated this week
- NVIDIA Apex: A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch ☆8,537 · Updated 2 weeks ago
- NVIDIA DALI: A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,278 · Updated this week
- Triton: Development repository for the Triton language and compiler ☆14,452 · Updated this week
- MMdnn is a set of tools to help users inter-operate among different deep learning frameworks. E.g. model conversion and visualization. Co… ☆5,804 · Updated 8 months ago
- FasterTransformer: Transformer related optimization, including BERT, GPT ☆6,022 · Updated 10 months ago
- NCCL: Optimized primitives for collective multi-GPU communication ☆3,463 · Updated 3 weeks ago
- CUTLASS: CUDA Templates for Linear Algebra Subroutines ☆6,210 · Updated last week
- onnx-simplifier: Simplify your onnx model ☆3,976 · Updated 5 months ago
- Caffe2 is a lightweight, modular, and scalable deep learning framework. ☆8,419 · Updated 2 years ago
- Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distille… ☆4,369 · Updated last year
- Caffe: a fast open framework for deep learning. ☆34,224 · Updated 6 months ago
- ☆1,656 · Updated 6 years ago
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆8,752 · Updated this week
- gemmlowp: Low-precision matrix multiplication ☆1,790 · Updated last year
- CuPy: NumPy & SciPy for GPU ☆9,899 · Updated this week
- MLIR: "Multi-Level Intermediate Representation" Compiler Infrastructure ☆1,737 · Updated 3 years ago
- MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba. Full multimodal LLM … ☆9,666 · Updated this week
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. ☆4,993 · Updated 8 months ago
- XLA: A machine learning compiler for GPUs, CPUs, and ML accelerators ☆2,963 · Updated this week
- awesome-tensor-compilers: A list of awesome compiler projects and papers for tensor computation and deep learning. ☆2,485 · Updated 4 months ago
- Torch-TensorRT: PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,676 · Updated this week
- XNNPACK: High-efficiency floating-point neural network inference operators for mobile, server, and Web ☆1,951 · Updated this week
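Several entries above (ONNX, onnx-simplifier, Torch-TensorRT) revolve around model interchange, so here is a hedged sketch of how an exported ONNX model can be imported into TVM's Relay frontend, compiled, and run on CPU. The file name `model.onnx`, the input name `input`, and the `(1, 3, 224, 224)` shape are assumptions for illustration; the Relay flow shown matches older TVM releases (newer releases are moving toward Relax).

```python
# Sketch: import an ONNX model into TVM Relay, compile it for CPU, and run it.
# "model.onnx", "input", and the input shape are hypothetical placeholders.
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}

# Convert the ONNX graph into Relay, TVM's high-level IR.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for a plain CPU target; other targets ("cuda", etc.) work similarly.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Execute the compiled module with the graph executor.
dev = tvm.cpu(0)
runtime = graph_executor.GraphModule(lib["default"](dev))
runtime.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
runtime.run()
output = runtime.get_output(0).numpy()
```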