alibaba / EasyParallelLibrary
Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training.
☆271 · Updated 2 years ago
Alternatives and similar repositories for EasyParallelLibrary
Users interested in EasyParallelLibrary are comparing it to the libraries listed below.
- ☆219 · Updated 2 years ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆99 · Updated 2 years ago
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters ☆160 · Updated last year
- Deep Learning Framework Performance Profiling Toolkit ☆296 · Updated 3 years ago
- GLake: optimizing GPU memory management and IO transmission. ☆497 · Updated 10 months ago
- GPU-scheduler-for-deep-learning ☆210 · Updated 5 years ago
- PyTorch distributed training acceleration framework ☆55 · Updated 5 months ago
- ☆130 · Updated last year
- HierarchicalKV is a part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆192 · Updated 3 months ago
- Running BERT without Padding ☆478 · Updated 3 years ago
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆476 · Updated last year
- LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training ☆405 · Updated 6 months ago
- oneflow documentation ☆69 · Updated last year
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆333 · Updated last month
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆298 · Updated 2 weeks ago
- OneFlow models for benchmarking. ☆104 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆59 · Updated 2 years ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆124 · Updated 2 years ago
- An Efficient Pipelined Data Parallel Approach for Training Large Models ☆76 · Updated 5 years ago
- Transformer-related optimization, including BERT, GPT ☆39 · Updated 2 years ago
- ☆58 · Updated 5 years ago
- Zero Bubble Pipeline Parallelism ☆449 · Updated 9 months ago
- ☆141 · Updated last year
- LLM training technologies developed by Kwai ☆70 · Updated 2 weeks ago
- ☆56 · Updated 2 years ago
- A lightweight parameter server interface ☆87 · Updated 3 years ago
- ☆79 · Updated 2 years ago
- ☆523 · Updated 2 weeks ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆916 · Updated last year
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆122 · Updated 2 years ago