snuspl / nimble
Lightweight and Parallel Deep Learning Framework
☆261 · Updated 2 years ago
Alternatives and similar repositories for nimble:
Users interested in nimble are comparing it to the libraries listed below.
- Study Group of Deep Learning Compiler ☆158 · Updated 2 years ago
- A tensor-aware point-to-point communication primitive for machine learning ☆257 · Updated 2 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆131 · Updated 3 years ago
- A Tool for Automatic Parallelization of Deep Learning Training in Distributed Multi-GPU Environments ☆130 · Updated 3 years ago
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆197 · Updated 3 years ago
- A library to analyze PyTorch traces ☆367 · Updated last week
- Python bindings for NVTX (see the sketch after this list) ☆66 · Updated last year
- A GPU performance profiling tool for PyTorch models ☆505 · Updated 3 years ago
- ☆142 · Updated 3 months ago
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆76 · Updated 4 years ago
- Research and development for optimizing transformers ☆126 · Updated 4 years ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- ☆251 · Updated 9 months ago
- TVM stack: exploring the incredible explosion of deep-learning frameworks and how to bring them together ☆64 · Updated 6 years ago
- A tool for examining GPU scheduling behavior ☆81 · Updated 8 months ago
- Simple Distributed Deep Learning on TensorFlow ☆134 · Updated 2 years ago
- ☆15 · Updated 3 years ago
- A GPipe implementation in PyTorch ☆837 · Updated 9 months ago
- A schedule language for large model training ☆146 · Updated 10 months ago
- MONeT framework for reducing memory consumption of DNN training ☆173 · Updated 4 years ago
- ☆389 · Updated 2 years ago
- This repository contains the results and code for the MLPerf™ Training v0.7 benchmark ☆56 · Updated last year
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud ☆116 · Updated last year
- Slicing a PyTorch Tensor Into Parallel Shards ☆298 · Updated 3 years ago
- TVM integration into PyTorch ☆452 · Updated 5 years ago
- System for automated integration of deep learning backends ☆48 · Updated 2 years ago
- 🔮 Execution time predictions for deep neural network training iterations across different GPUs ☆62 · Updated 2 years ago
- ☆43 · Updated last year
- tophub autotvm log collections ☆69 · Updated 2 years ago
- Splits a single Nvidia GPU into multiple partitions with complete compute and memory isolation (with respect to performance) between the partitions ☆158 · Updated 6 years ago
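
As a quick illustration of what the NVTX bindings listed above are typically used for, here is a minimal sketch assuming the `nvtx` PyPI package and an NVIDIA profiler such as Nsight Systems to view the resulting ranges; `train_step` and its body are hypothetical placeholders, not code from any of the repositories.

```python
# Minimal sketch: marking code regions with NVTX ranges so they show up as
# named spans in an NVIDIA profiler timeline (e.g. Nsight Systems).
# Assumes `pip install nvtx`; `train_step` is a hypothetical example function.
import nvtx

@nvtx.annotate("train_step", color="green")   # decorator form: annotates the whole call
def train_step(batch):
    with nvtx.annotate("forward"):            # context-manager form: marks a sub-region
        out = sum(batch)                      # stand-in for the real forward pass
    return out

if __name__ == "__main__":
    for step in range(3):
        nvtx.push_range(f"step {step}")       # explicit push/pop API
        train_step([1, 2, 3])
        nvtx.pop_range()
```

Without a profiler attached the annotations are inexpensive no-ops, so they can stay in training code permanently.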