dlpack: common in-memory tensor structure
☆1,177 · updated Jan 26, 2026
Alternatives and similar repositories for dlpack
Users interested in dlpack are comparing it to the libraries listed below.
- Open Machine Learning Compiler Framework (☆13,197, updated this week)
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. (☆1,003, updated Sep 19, 2024)
- Dive into Deep Learning Compiler (☆647, updated Jun 19, 2022)
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem. (☆1,770, updated Mar 13, 2026)
- TVM integration into PyTorch (☆456, updated Jan 15, 2020)
- A common bricks library for building scalable and portable distributed machine learning. (☆878, updated Mar 9, 2026)
- oneAPI Deep Neural Network Library (oneDNN) (☆3,964, updated this week)
- CUDA Templates and Python DSLs for High-Performance Linear Algebra (☆9,442, updated this week)
- Compiler for Neural Network hardware accelerators (☆3,326, updated May 11, 2024)
- Development repository for the Triton language and compiler (☆18,656, updated Mar 14, 2026)
- A retargetable MLIR-based machine learning compiler and runtime toolkit. (☆3,661, updated this week)
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. (☆1,077, updated Apr 17, 2024)
- Collective communications library with various primitives for multi-machine training. (☆1,405, updated Mar 11, 2026)
- [ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl (☆1,821, updated Oct 9, 2023)
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ (☆1,543, updated this week)
- Optimized primitives for collective multi-GPU communication (☆4,531, updated this week)
- A tensor-aware point-to-point communication primitive for machine learning (☆284, updated Dec 17, 2025)
- The Tensor Algebra SuperOptimizer for Deep Learning (☆740, updated Jan 26, 2023)
- Symbolic Expression and Statement Module for new DSLs (☆205, updated Oct 6, 2020)
- A domain-specific language to express machine learning workloads. (☆1,764, updated Apr 28, 2023)
- A language for fast, portable data-parallel computation (☆6,601, updated this week)
- A list of awesome compiler projects and papers for tensor computation and deep learning. (☆2,733, updated Oct 19, 2024)
- Acceleration package for neural networks on multi-core CPUs (☆1,702, updated Jun 11, 2024)
- Kernel Fusion and Runtime Compilation Based on NNVM (☆73, updated Nov 21, 2016)
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… (☆5,642, updated Mar 13, 2026)
- "Multi-Level Intermediate Representation" Compiler Infrastructure (☆1,765, updated Apr 22, 2021)
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training (☆1,864, updated Mar 12, 2026)
- FlashInfer: Kernel Library for LLM Serving (☆5,145, updated this week)
- A machine learning compiler for GPUs, CPUs, and ML accelerators (☆4,071, updated this week)
- Tutorial code on how to build your own Deep Learning System in 2k Lines (☆2,014, updated Oct 4, 2018)
- Matrix Shadow: Lightweight CPU/GPU Matrix and Tensor Template Library in C++/CUDA for (Deep) Machine Learning (☆1,121, updated Aug 4, 2019)
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. (☆921, updated Dec 30, 2024)
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT (☆2,958, updated this week)
- ATen: A TENsor library for C++11 (☆717, updated Nov 20, 2019)
- Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. (☆14,679, updated Dec 1, 2025)
- Open standard for machine learning interoperability (☆20,484, updated this week)