mindspore-ai / mindspore
MindSpore is an open-source deep learning training/inference framework that can be used in mobile, edge, and cloud scenarios.
☆4,627 · Updated last year
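For orientation, a minimal sketch of what a MindSpore model definition looks like. The class and layer names (`nn.Cell`, `nn.Dense`, `construct`, `set_context`) follow MindSpore's Python API; the tiny network itself is an illustrative assumption, not code from the repository.

```python
import numpy as np
import mindspore as ms
from mindspore import nn, Tensor

ms.set_context(mode=ms.PYNATIVE_MODE)  # eager-style execution for this small demo

class TinyNet(nn.Cell):
    """Two-layer MLP; MindSpore models subclass nn.Cell and implement construct()."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Dense(16, 32)   # fully connected layer (analogous to PyTorch's nn.Linear)
        self.relu = nn.ReLU()
        self.fc2 = nn.Dense(32, 4)

    def construct(self, x):           # the forward pass is named construct in MindSpore
        return self.fc2(self.relu(self.fc1(x)))

net = TinyNet()
x = Tensor(np.random.randn(8, 16).astype(np.float32))
print(net(x).shape)                   # (8, 4)
```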
Alternatives and similar repositories for mindspore
Users interested in mindspore are comparing it to the libraries listed below:
- Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators.☆3,213 · Updated 3 months ago
- MegEngine is a fast, scalable, easy-to-use deep learning framework with automatic differentiation.☆4,804 · Updated last year
- OneFlow is a deep learning framework designed to be user-friendly, scalable, and efficient.☆9,366 · Updated 2 months ago
- LightSeq: A High Performance Library for Sequence Processing and Generation☆3,298 · Updated 2 years ago
- Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators☆12,758 · Updated last week
- A high-performance and generic framework for distributed DNN training☆3,708 · Updated 2 years ago
- Transformer-related optimization, including BERT and GPT☆6,331 · Updated last year
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU.☆1,534 · Updated 3 months ago
- A primitive library for neural networks☆1,364 · Updated 11 months ago
- The Triton Inference Server provides an optimized cloud and edge inferencing solution.☆9,938 · Updated this week
- TNN: a uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is …☆4,587 · Updated 5 months ago
- Ongoing research training transformer models at scale☆13,976 · Updated this week
- PaddleSlim is an open-source library for deep model compression and architecture search.☆1,609 · Updated last week
- Bolt is a deep learning library with high performance and heterogeneous flexibility.☆953 · Updated 6 months ago
- PyTorch extensions for high-performance and large-scale training.☆3,384 · Updated 6 months ago
- PaddlePaddle large-model development suite, providing end-to-end development toolchains for large language models, cross-modal large models, bio-computing large models, and other domains.☆474 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective (a minimal setup sketch appears after this list).☆40,538 · Updated this week
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch☆8,832 · Updated this week
- Bagua speeds up PyTorch☆882 · Updated last year
- ONNX-TensorRT: TensorRT backend for ONNX☆3,159 · Updated last month
- OpenMMLab Model Deployment Framework☆3,055 · Updated last year
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices☆4,493 · Updated 7 months ago
- Deep Learning Visualization Toolkit (PaddlePaddle's deep learning visualization tool)☆4,848 · Updated 9 months ago
- PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the PaddlePaddle core framework: high-performance single-machine and distributed training for deep learning and machine learning, with cross-platform deployment)☆23,343 · Updated this week
- Several simple examples for popular neural network toolkits calling custom CUDA operators.☆1,515 · Updated 4 years ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i…); a usage sketch appears after this list.☆9,231 · Updated last week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone…☆12,276 · Updated last month
- A flexible, high-performance carrier for machine learning models (PaddlePaddle's model-serving framework)☆917 · Updated 5 months ago
- 《Machine Learning Systems: Design and Implementation》 (Chinese version)☆4,669 · Updated last year
- MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.☆5,030 · Updated last year
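For the DeepSpeed entry above, a hedged sketch of the typical setup: wrap a PyTorch model with `deepspeed.initialize` and drive training through the returned engine. The toy model, data, and config values are illustrative assumptions, and DeepSpeed scripts are normally run on GPU machines via the `deepspeed` launcher.

```python
import torch
import deepspeed

model = torch.nn.Linear(32, 4)        # illustrative toy model

ds_config = {                         # minimal illustrative config
    "train_batch_size": 8,
    "fp16": {"enabled": False},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
}

# deepspeed.initialize returns (engine, optimizer, dataloader, lr_scheduler)
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

x = torch.randn(8, 32).to(model_engine.device)
y = torch.randn(8, 4).to(model_engine.device)

loss = torch.nn.functional.mse_loss(model_engine(x), y)
model_engine.backward(loss)           # engine handles scaling/accumulation
model_engine.step()
```

Launching with `deepspeed train.py` (rather than plain `python`) sets up the distributed environment the engine expects.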
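For the 🚀 Accelerate entry above, a hedged sketch of its core pattern: `Accelerator.prepare()` adapts the model, optimizer, and dataloader to the current device/distributed setup, and `accelerator.backward()` replaces `loss.backward()`. The model and data below are illustrative assumptions.

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()           # picks up device, DDP, and mixed-precision config

model = torch.nn.Linear(32, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
dataset = torch.utils.data.TensorDataset(torch.randn(64, 32), torch.randn(64, 4))
loader = torch.utils.data.DataLoader(dataset, batch_size=8)

model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)        # handles mixed precision / gradient scaling
    optimizer.step()
```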