A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description.
☆1,006 · Updated Sep 19, 2024
Alternatives and similar repositories for nnfusion
Users interested in nnfusion are comparing it to the libraries listed below.
- The Tensor Algebra SuperOptimizer for Deep Learning (☆739 · Updated Jan 26, 2023)
- A list of awesome compiler projects and papers for tensor computation and deep learning. (☆2,731 · Updated Oct 19, 2024)
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. (☆918 · Updated Dec 30, 2024)
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration (☆199 · Updated Apr 27, 2022)
- Automatic Schedule Exploration and Optimization Framework for Tensor Computations (☆182 · Updated Apr 25, 2022)
- The Torch-MLIR project aims to provide first class support from the PyTorch ecosystem to the MLIR ecosystem. (☆1,754 · Updated this week)
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections (☆125 · Updated Jun 23, 2022)
- (☆422 · Updated Jan 4, 2026)
- Dive into Deep Learning Compiler (☆645 · Updated Jun 19, 2022)
- MegCC is a deep learning model compiler with an ultra-lightweight runtime that is efficient and easy to port (☆486 · Updated Oct 23, 2024)
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure (☆976 · Updated Feb 20, 2026)
- Open Machine Learning Compiler Framework (☆13,142 · Updated this week)
- A performant and modular runtime for TensorFlow (☆753 · Updated Sep 4, 2025)
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training (☆1,861 · Updated Feb 20, 2026)
- An open-source efficient deep learning framework/compiler, written in Python. (☆737 · Updated Sep 4, 2025)
- A primitive library for neural networks (☆1,366 · Updated Nov 24, 2024)
- A retargetable MLIR-based machine learning compiler and runtime toolkit. (☆3,614 · Updated this week)
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 (☆478 · Updated Mar 15, 2024)
- (☆1,992 · Updated Jul 29, 2023)
- A model compilation solution for various hardware (☆464 · Updated Aug 20, 2025)
- CUDA Templates and Python DSLs for High-Performance Linear Algebra (☆9,315 · Updated this week)
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators (☆121 · Updated Oct 26, 2022)
- SparseTIR: Sparse Tensor Compiler for Deep Learning (☆143 · Updated Mar 31, 2023)
- AKG (Auto Kernel Generator) is an optimizer for operators in Deep Learning Networks, which provides the ability to automatically fuse ops… (☆245 · Updated Dec 13, 2025)
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) (☆570 · Updated Apr 20, 2023)
- A collection of compiler learning resources. (☆2,684 · Updated Mar 19, 2025)
- (☆192 · Updated Mar 28, 2023)
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ (☆1,534 · Updated this week)
- An extension of TVMScript to write simple and high-performance GPU kernels with Tensor Cores. (☆51 · Updated Jul 23, 2024)
- The Tensor Algebra Compiler (taco) computes sparse tensor expressions on CPUs and GPUs (☆1,349 · Updated Apr 14, 2025)
- Distributed Compiler based on Triton for Parallel Systems (☆1,361 · Updated Feb 13, 2026)
- heterogeneity-aware-lowering-and-optimization (☆257 · Updated Jan 20, 2024)
- (☆145 · Updated Jan 30, 2025)
- DietCode Code Release (☆65 · Updated Jul 21, 2022)
- common in-memory tensor structure (☆1,169 · Updated Jan 26, 2026)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (☆3,170 · Updated Feb 21, 2026)
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… (☆4,706 · Updated Jan 12, 2026)
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. (☆751 · Updated Aug 6, 2025)
- A home for the final text of all TVM RFCs. (☆109 · Updated Sep 24, 2024)