BaguaSys / bagua
Bagua Speeds up PyTorch
☆883 · Updated last year
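For context on what the alternatives below are compared against: Bagua speeds up PyTorch by wrapping an ordinary model and optimizer with its own distributed communication algorithms. A minimal sketch of that usage pattern, based on Bagua's documented API (exact names and launcher details may differ between versions):

```python
import torch
import bagua.torch_api as bagua
from bagua.torch_api.algorithms import gradient_allreduce

# One process per GPU; Bagua reads rank/world-size from the launcher's env vars.
torch.cuda.set_device(bagua.get_local_rank())
bagua.init_process_group()

model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Wrap the model with a Bagua communication algorithm
# (here: plain gradient allreduce; Bagua also ships compressed and
# decentralized variants).
model = model.with_bagua(
    [optimizer], gradient_allreduce.GradientAllReduceAlgorithm()
)
```

The training loop then stays plain PyTorch; the script is started with Bagua's multi-process launcher (one worker per GPU).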
Alternatives and similar repositories for bagua
Users interested in bagua are comparing it to the libraries listed below.
- High performance distributed framework for training deep learning recommendation models based on PyTorch. ☆409 · Updated 4 months ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,064 · Updated last year
- A CPU+GPU Profiling library that provides access to timeline traces and hardware performance counters. ☆881 · Updated this week
- PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone. ☆766 · Updated 2 years ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆269 · Updated 2 years ago
- Running BERT without Padding ☆475 · Updated 3 years ago
- PyTorch elastic training ☆730 · Updated 3 years ago
- DeepLearning Framework Performance Profiling Toolkit ☆292 · Updated 3 years ago
- Slicing a PyTorch Tensor Into Parallel Shards ☆301 · Updated 4 months ago
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆395 · Updated this week
- A tensor-aware point-to-point communication primitive for machine learning ☆273 · Updated 2 months ago
- Dive into Deep Learning Compiler ☆646 · Updated 3 years ago
- A fast and user-friendly runtime for transformer inference (Bert, Albert, GPT2, Decoders, etc) on CPU and GPU. ☆1,532 · Updated 3 months ago
- A GPU performance profiling tool for PyTorch models ☆508 · Updated 4 years ago
- High performance model preprocessing library on PyTorch ☆644 · Updated last year
- HugeCTR is a high-efficiency GPU framework designed for Click-Through-Rate (CTR) estimation training. ☆1,035 · Updated last month
- Pipeline Parallelism for PyTorch ☆780 · Updated last year
- Collective communications library with various primitives for multi-machine training. ☆1,364 · Updated this week
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆899 · Updated 9 months ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,453 · Updated this week
- A PyTorch repo for data loading and utilities to be shared by the PyTorch domain libraries. ☆1,228 · Updated this week
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters ☆159 · Updated last year
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆407 · Updated 2 months ago
- common in-memory tensor structure (see the DLPack sketch after this list) ☆1,080 · Updated last week
- A GPipe implementation in PyTorch (see the pipeline-parallelism sketch after this list) ☆857 · Updated last year
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆994 · Updated last year
- A performant and modular runtime for TensorFlow ☆760 · Updated last month
- TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance. ☆988 · Updated this week
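For the "A GPipe implementation in PyTorch" entry above (presumably torchgpipe), pipeline parallelism is exposed by wrapping an `nn.Sequential` and splitting its layers across devices. A minimal sketch following torchgpipe's documented API; it assumes at least three CUDA devices are available, and version details may differ:

```python
import torch
from torch import nn
from torchgpipe import GPipe

# A purely sequential model can be partitioned layer-by-layer.
model = nn.Sequential(
    nn.Linear(1024, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 10),
)

# balance: how many child modules each partition (GPU) gets;
# chunks: how many micro-batches each mini-batch is split into for pipelining.
model = GPipe(model, balance=[2, 2, 1], chunks=4)

x = torch.randn(64, 1024).to(model.devices[0])   # input goes to the first partition
y = model(x)                                     # output lands on the last partition's device
```

Micro-batching (the `chunks` argument) is what keeps the pipeline stages busy instead of idling while earlier partitions finish.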
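The "common in-memory tensor structure" entry above refers to DLPack, a shared tensor layout that lets frameworks exchange data without copying. The dlpack repository itself only defines the C structs; a minimal sketch of the exchange using PyTorch's DLPack utilities:

```python
import torch
from torch.utils.dlpack import to_dlpack, from_dlpack

x = torch.arange(6, dtype=torch.float32)

# Export to a DLPack capsule and re-import it; no copy is made,
# so both tensors view the same underlying memory.
capsule = to_dlpack(x)
y = from_dlpack(capsule)

y[0] = 42.0
assert x[0].item() == 42.0  # the write is visible through the original tensor
```

The same capsule mechanism is what lets PyTorch tensors move to and from libraries such as CuPy or JAX without a host round-trip; note that a capsule may only be consumed once.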