snuspl / nimble
Lightweight and Parallel Deep Learning Framework
☆263 Updated 2 years ago
Alternatives and similar repositories for nimble
Users interested in nimble are comparing it to the libraries listed below.
- A tensor-aware point-to-point communication primitive for machine learning ☆257 Updated 2 years ago
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration ☆200 Updated 3 years ago
- Study Group of Deep Learning Compiler ☆159 Updated 2 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆131 Updated 3 years ago
- A GPU performance profiling tool for PyTorch models ☆503 Updated 3 years ago
- A Tool for Automatic Parallelization of Deep Learning Training in Distributed Multi-GPU Environments ☆130 Updated 3 years ago
- A library to analyze PyTorch traces ☆379 Updated this week
- A GPipe implementation in PyTorch ☆842 Updated 10 months ago
- Python bindings for NVTX ☆66 Updated last year
- Research and development for optimizing transformers ☆126 Updated 4 years ago
- ☆390 Updated 2 years ago
- ☆143 Updated 4 months ago
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆76 Updated 4 years ago
- TVM stack: exploring the incredible explosion of deep-learning frameworks and how to bring them together ☆64 Updated 7 years ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 Updated 3 years ago
- Convert nvprof profiles into about:tracing compatible JSON files ☆69 Updated 4 years ago
- A tool for examining GPU scheduling behavior ☆83 Updated 9 months ago
- System for automated integration of deep learning backends ☆48 Updated 2 years ago
- ☆43 Updated last year
- ☆100 Updated last year
- The Tensor Algebra SuperOptimizer for Deep Learning ☆714 Updated 2 years ago
- Fine-grained GPU sharing primitives ☆141 Updated 5 years ago
- ☆249 Updated 10 months ago
- MONeT framework for reducing memory consumption of DNN training ☆173 Updated 4 years ago
- Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616 ☆132 Updated last year
- Splits a single NVIDIA GPU into multiple partitions with complete compute and memory isolation (with respect to performance) between the partitions ☆159 Updated 6 years ago
- Fast sparse deep learning on CPUs ☆53 Updated 2 years ago
- ☆15 Updated 3 years ago
- Simple Distributed Deep Learning on TensorFlow ☆134 Updated 2 years ago
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud ☆116 Updated last year