mindspore-lab / mindrec
MindSpore large-scale recommender system library.
☆10 Updated last year
Alternatives and similar repositories for mindrec
Users interested in mindrec are comparing it to the libraries listed below.
- ☆45 Updated last year
- ☆166 Updated this week
- PyTorch distributed training acceleration framework ☆49 Updated 3 months ago
- ☆127 Updated 5 months ago
- ☆34 Updated 5 months ago
- ☆79 Updated last year
- A TinyRAG implementation based on MindSpore ☆16 Updated 5 months ago
- FlagScale is a large model toolkit based on open-sourced projects. ☆281 Updated this week
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆94 Updated 2 years ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆81 Updated 3 weeks ago
- ☆332 Updated 4 months ago
- FlagPerf is an open-source software platform for benchmarking AI chips. ☆333 Updated last week
- Transformer related optimization, including BERT, GPT ☆39 Updated 2 years ago
- Pipeline Parallelism Emulation and Visualization ☆40 Updated 2 weeks ago
- alibabacloud-aiacc-demo ☆43 Updated 2 years ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆267 Updated 2 years ago
- PaddlePaddle Developer Community ☆111 Updated this week
- Sky Computing: Accelerating Geo-distributed Computing in Federated Learning ☆91 Updated 2 years ago
- ☆148 Updated 4 months ago
- The DGL Operator makes it easy to run Deep Graph Library (DGL) graph neural network training on Kubernetes ☆44 Updated 3 years ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆55 Updated 10 months ago
- ☆13 Updated this week
- ☆49 Updated last week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆100 Updated last year
- Transformer related optimization, including BERT, GPT ☆17 Updated last year
- ☆139 Updated last year
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆371 Updated this week
- PaddlePaddle custom device implementation (custom hardware integration for PaddlePaddle). ☆84 Updated this week
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆112 Updated last year
- ☆63 Updated this week