alibaba / EasyParallelLibrary
Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training.
☆271 · Updated 2 years ago
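EPL's core idea is to express a parallelism strategy as a few Python annotations over an otherwise unchanged model, rather than rewriting the model itself. The sketch below follows the annotation pattern shown in EPL's README; it assumes TensorFlow 1.x and an installed `epl` package, and the two stage-building functions are hypothetical placeholders, not EPL APIs.

```python
import tensorflow as tf
import epl

def stage_0():
    # Hypothetical first half of the model.
    x = tf.placeholder(tf.float32, [None, 16])
    return tf.layers.dense(x, 32, activation=tf.nn.relu)

def stage_1(hidden):
    # Hypothetical second half of the model.
    return tf.layers.dense(hidden, 1)

# Pipeline parallelism: run 4 micro-batches through two pipeline stages.
# (Plain data parallelism is just `epl.init()` plus a single
# `epl.replicate(device_count=1)` scope around the whole model.)
config = epl.Config({"pipeline.num_micro_batch": 4})
epl.init(config)
with epl.replicate(device_count=1, name="stage_0"):
    hidden = stage_0()
with epl.replicate(device_count=1, name="stage_1"):
    output = stage_1(hidden)
```

Because the strategy lives in the annotations and the `epl.Config`, switching between pipeline, data, and other parallelism strategies does not require touching the model-building code.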
Alternatives and similar repositories for EasyParallelLibrary
Users interested in EasyParallelLibrary are comparing it to the libraries listed below.
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆98 · Updated 2 years ago
- Deep Learning Framework Performance Profiling Toolkit ☆294 · Updated 3 years ago
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters ☆159 · Updated last year
- GLake: optimizing GPU memory management and IO transmission. ☆491 · Updated 8 months ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆476 · Updated last year
- HierarchicalKV is a part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆183 · Updated last month
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆328 · Updated last week
- OneFlow documentation ☆69 · Updated last year
- GPU-scheduler-for-deep-learning ☆210 · Updated 5 years ago
- PyTorch distributed training acceleration framework ☆54 · Updated 4 months ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆285 · Updated 4 months ago
- Running BERT without Padding (the padding-free packed layout is sketched after this list) ☆476 · Updated 3 years ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆122 · Updated 2 years ago
- OneFlow models for benchmarking. ☆104 · Updated last year
- Zero Bubble Pipeline Parallelism ☆442 · Updated 7 months ago
- Transformer-related optimization, including BERT and GPT ☆59 · Updated 2 years ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆910 · Updated 11 months ago
- An Efficient Pipelined Data Parallel Approach for Training Large Models ☆76 · Updated 5 years ago
- A lightweight parameter server interface ☆87 · Updated 2 years ago
- LLM training technologies developed by kwai ☆67 · Updated 3 weeks ago
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆406 · Updated 4 months ago
- Transformer-related optimization, including BERT and GPT ☆39 · Updated 2 years ago
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆122 · Updated 2 years ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Updated 3 years ago
- FastNN provides distributed training examples that use EPL. ☆85 · Updated 3 years ago
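Two entries in this list (the optimized BERT inference project and "Running BERT without Padding") revolve around the same padding-free trick: instead of computing over a rectangular `(batch, max_len)` tensor full of pad tokens, all valid tokens are packed into one flat sequence, and per-sequence prefix-sum offsets let kernels recover the boundaries. The sketch below is a hypothetical PyTorch illustration of that packed layout, not code from either project.

```python
import torch

def pack(padded: torch.Tensor, lengths: torch.Tensor):
    # padded: (batch, max_len, hidden); lengths: (batch,) valid token counts.
    mask = torch.arange(padded.size(1)).unsqueeze(0) < lengths.unsqueeze(1)
    packed = padded[mask]  # (total_valid_tokens, hidden), pad rows dropped
    # Prefix sums mark where each sequence starts and ends in the packed tensor.
    offsets = torch.cat([torch.zeros(1, dtype=torch.long), lengths.cumsum(0)])
    return packed, offsets

def unpack(packed: torch.Tensor, offsets: torch.Tensor, max_len: int):
    # Restore the rectangular padded layout for ops that still need it.
    batch = offsets.numel() - 1
    out = packed.new_zeros(batch, max_len, packed.size(-1))
    for i in range(batch):
        seq = packed[offsets[i]:offsets[i + 1]]
        out[i, :seq.size(0)] = seq
    return out

padded = torch.randn(2, 4, 8)           # batch=2, max_len=4, hidden=8
lengths = torch.tensor([2, 3])          # only 5 of the 8 positions are real
packed, offsets = pack(padded, lengths)
assert packed.shape == (5, 8)           # compute scales with real tokens only
restored = unpack(packed, offsets, max_len=4)
```

The payoff is that attention and feed-forward FLOPs scale with the number of real tokens rather than with `batch * max_len`, which is what these projects exploit with custom fused kernels.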