snuspl / parallax
A Tool for Automatic Parallelization of Deep Learning Training in Distributed Multi-GPU Environments.
☆132 · Updated 3 years ago
Alternatives and similar repositories for parallax
Users interested in parallax are comparing it to the libraries listed below.
- An analytical performance modeling tool for deep neural networks. ☆90 · Updated 4 years ago
- Simple Distributed Deep Learning on TensorFlow ☆133 · Updated 3 months ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆134 · Updated 3 years ago
- Lightweight and Parallel Deep Learning Framework ☆264 · Updated 2 years ago
- This repository contains the results and code for the MLPerf™ Training v0.7 benchmark. ☆57 · Updated 2 years ago
- Implementing Google's DistBelief paper ☆113 · Updated 2 years ago
- A tensor-aware point-to-point communication primitive for machine learning ☆271 · Updated last month
- DLPack for Tensorflow ☆35 · Updated 5 years ago
- Runtime Tracing Library for TensorFlow ☆43 · Updated 6 years ago
- Implement distributed machine learning with PyTorch + OpenMPI ☆52 · Updated 6 years ago
- GPU-specialized parameter server for GPU machine learning. ☆101 · Updated 7 years ago
- Simple Training and Deployment of Fast End-to-End Binary Networks ☆157 · Updated 3 years ago
- This repository contains the results and code for the MLPerf™ Training v0.5 benchmark. ☆35 · Updated 4 months ago
- Machine Learning System ☆14 · Updated 5 years ago
- image to column ☆30 · Updated 11 years ago
- Kernel Fusion and Runtime Compilation Based on NNVM ☆71 · Updated 8 years ago
- A library for syntactically rewriting Python programs, pronounced (sinner). ☆69 · Updated 3 years ago
- Python bindings for NVTX ☆66 · Updated 2 years ago
- Test winograd convolution written in TVM for CUDA and AMDGPU ☆41 · Updated 6 years ago
- Codebase associated with the PyTorch compiler tutorial ☆46 · Updated 6 years ago
- CS294; AI For Systems and Systems For AI ☆225 · Updated 6 years ago
- This is a Tensor Train based compression library to compress sparse embedding tables used in large-scale machine learning models such as … ☆194 · Updated 3 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆36 · Updated 5 years ago
- ☆43 · Updated 2 years ago
- Research and development for optimizing transformers ☆129 · Updated 4 years ago
- DAWNBench: An End-to-End Deep Learning Benchmark and Competition ☆262 · Updated 5 years ago
- ☆144 · Updated 7 months ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models. ICML 2021 ☆56 · Updated 4 years ago
- Official code for "Writing Distributed Applications with PyTorch", PyTorch Tutorial ☆264 · Updated 2 years ago
- Place for meetup slides ☆140 · Updated 4 years ago