mindspore-ai / mindspore
MindSpore is an open-source deep learning training/inference framework that can be used for mobile, edge, and cloud scenarios.
☆4,671 · Jul 29, 2024 · Updated last year
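As a quick orientation before the comparison list, here is a minimal sketch of what a MindSpore model definition and forward pass look like. It assumes a recent MindSpore release plus NumPy; the network name `TinyNet`, the layer sizes, and the random input are illustrative only and are not taken from the repository.

```python
# Minimal, illustrative MindSpore sketch (not from this listing).
import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor

ms.set_context(mode=ms.PYNATIVE_MODE)  # eager-style execution for easy debugging

class TinyNet(nn.Cell):
    """A two-layer perceptron; the sizes are placeholders, not a real model."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Dense(16, 32)   # fully connected layer, 16 -> 32
        self.relu = nn.ReLU()
        self.fc2 = nn.Dense(32, 4)    # fully connected layer, 32 -> 4

    def construct(self, x):           # MindSpore's forward method is `construct`
        return self.fc2(self.relu(self.fc1(x)))

net = TinyNet()
x = Tensor(np.random.randn(8, 16).astype(np.float32))  # batch of 8 random inputs
print(net(x).shape)  # -> (8, 4)
```

In eager (PyNative) mode the network runs operation by operation, much like other define-by-run frameworks; switching `mode` to `ms.GRAPH_MODE` lets MindSpore compile the same `construct` method as a whole graph.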
Alternatives and similar repositories for mindspore
Users interested in mindspore are comparing it to the libraries listed below
- AKG (Auto Kernel Generator) is an optimizer for operators in Deep Learning Networks, which provides the ability to automatically fuse ops… ☆245 · Dec 13, 2025 · Updated 2 months ago
- MegEngine is a fast, scalable, easy-to-use deep learning framework that supports automatic differentiation ☆4,812 · Oct 24, 2024 · Updated last year
- Open Machine Learning Compiler Framework ☆13,117 · Updated this week
- OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient. ☆9,392 · Dec 4, 2025 · Updated 2 months ago
- Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators. ☆3,222 · Jan 27, 2026 · Updated 2 weeks ago
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆12,672 · Feb 4, 2026 · Updated last week
- PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (『飞桨』, the PaddlePaddle core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning & machine learning) ☆23,626 · Updated this week
- MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba. Full multimodal LLM … ☆14,104 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,578 · Feb 7, 2026 · Updated last week
- A high performance and generic framework for distributed DNN training ☆3,717 · Oct 3, 2023 · Updated 2 years ago
- TNN: developed by Tencent Youtu Lab and Guangying Lab, a uniform deep learning inference framework for mobile, desktop and server. TNN is … ☆4,620 · May 9, 2025 · Updated 9 months ago
- ncnn is a high-performance neural network inference framework optimized for the mobile platform ☆22,771 · Updated this week
- Ongoing research training transformer models at scale ☆15,162 · Updated this week
- Development repository for the Triton language and compiler ☆18,387 · Updated this week
- Transformer related optimization, including BERT, GPT ☆6,392 · Mar 27, 2024 · Updated last year
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆1,006 · Sep 19, 2024 · Updated last year
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,304 · May 16, 2023 · Updated 2 years ago
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices ☆4,505 · Mar 6, 2025 · Updated 11 months ago
- Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. ☆14,668 · Dec 1, 2025 · Updated 2 months ago
- A fast and user-friendly runtime for transformer inference (Bert, Albert, GPT2, Decoders, etc.) on CPU and GPU. ☆1,542 · Jul 18, 2025 · Updated 6 months ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆956 · Apr 11, 2025 · Updated 10 months ago
- Open standard for machine learning interoperability ☆20,295 · Updated this week
- oneAPI Deep Neural Network Library (oneDNN) ☆3,960 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,334 · Feb 6, 2026 · Updated last week
- AutoKernel is a simple, easy-to-use, low-barrier automatic operator optimization tool that improves the deployment efficiency of deep learning algorithms. ☆743 · Sep 23, 2022 · Updated 3 years ago
- A primitive library for neural networks ☆1,368 · Nov 24, 2024 · Updated last year
- Tensors and Dynamic neural networks in Python with strong GPU acceleration ☆97,298 · Updated this week
- Fast and memory-efficient exact attention ☆22,231 · Updated this week
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch ☆8,910 · Jan 26, 2026 · Updated 2 weeks ago
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆34,848 · Updated this week
- CUDA Templates and Python DSLs for High-Performance Linear Algebra ☆9,266 · Updated this week
- Visualizer for neural network, deep learning and machine learning models ☆32,383 · Updated this week
- Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads. ☆41,176 · Updated this week
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms. ☆5,036 · Jun 17, 2024 · Updated last year
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator ☆19,276 · Updated this week
- 《Machine Learning Systems: Design and Implementation》- Chinese Version ☆4,760 · Apr 13, 2024 · Updated last year
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆916 · Dec 30, 2024 · Updated last year
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,619 · Updated this week
- An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model c… ☆14,342 · Jul 3, 2024 · Updated last year