utsaslab / MONeT
MONeT framework for reducing memory consumption of DNN training
☆174 · Updated 4 years ago
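Memory-reduction frameworks in this space generally trade extra recomputation for lower activation memory. As a generic, minimal illustration of that trade-off (not MONeT's own API), the sketch below uses stock PyTorch's `torch.utils.checkpoint.checkpoint_sequential`, which discards intermediate activations on the forward pass and recomputes them during backward:

```python
# Minimal sketch: trading compute for memory with gradient checkpointing
# in stock PyTorch. Illustrates the general idea behind memory-reduction
# frameworks like MONeT; this is NOT MONeT's actual API.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

model = nn.Sequential(
    nn.Linear(1024, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1024),
)
x = torch.randn(64, 1024, requires_grad=True)

# Split the model into 2 segments: only segment-boundary activations are
# stored; everything inside a segment is recomputed during backward.
out = checkpoint_sequential(model, 2, x)
out.sum().backward()
```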
Alternatives and similar repositories for MONeT
Users interested in MONeT are comparing it to the libraries listed below.
- PyTorch implementation of L2L execution algorithm ☆108 · Updated 2 years ago
- Train ImageNet in 18 minutes on AWS ☆133 · Updated last year
- Slicing a PyTorch Tensor Into Parallel Shards ☆301 · Updated 5 months ago
- Programmable Neural Network Compression ☆149 · Updated 3 years ago
- ☆57 · Updated 3 years ago
- PyProf2: PyTorch Profiling tool ☆82 · Updated 5 years ago
- A GPU performance profiling tool for PyTorch models ☆508 · Updated 4 years ago
- [ICLR 2020] Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks ☆140 · Updated 5 years ago
- ☆108 · Updated 4 years ago
- Using ideas from product quantization for state-of-the-art neural network compression. ☆146 · Updated 4 years ago
- [Prototype] Tools for the concurrent manipulation of variably sized Tensors. ☆251 · Updated 3 years ago
- PyTorch layer-by-layer model profiler ☆608 · Updated 4 years ago
- Labels and other data for the paper "Are we done with ImageNet?" ☆195 · Updated 3 years ago
- Block-sparse primitives for PyTorch ☆160 · Updated 4 years ago
- Example code showing how to use Nvidia DALI in PyTorch, with fallback to torchvision. Contains a few differences from the official Nvidia … ☆198 · Updated 5 years ago
- "Layer-wise Adaptive Rate Scaling" in PyTorch☆87Updated 4 years ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- Estimate/count FLOPs for a given neural network using PyTorch (see the FLOP-counting sketch after this list) ☆306 · Updated 3 years ago
- ☆69 · Updated 5 years ago
- ☆62 · Updated 5 years ago
- End-to-end training of sparse deep neural networks with little-to-no performance loss. ☆329 · Updated 2 years ago
- ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training ☆199 · Updated 2 years ago
- ☆143 · Updated 2 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆137 · Updated 3 years ago
- Torch Distributed Experimental ☆117 · Updated last year
- A research library for PyTorch-based neural network pruning, compression, and more. ☆163 · Updated 2 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆483 · Updated 4 years ago
- A Re-implementation of Fixed-update Initialization ☆155 · Updated 6 years ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training". ☆62 · Updated 6 years ago
- This repository contains the results and code for the MLPerf™ Training v0.7 benchmark. ☆57 · Updated 2 years ago
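A minimal sketch of the LARS update rule named in the list above, assuming the usual formulation from You et al. (2017): each parameter tensor's step is scaled by a layer-wise "trust ratio" ‖w‖/‖∇w‖. This illustrates the rule itself, not the linked repository's code; `lars_step` and its default hyperparameters are hypothetical.

```python
# Minimal sketch of "Layer-wise Adaptive Rate Scaling" (LARS), assuming
# the You et al. (2017) formulation; not the linked repo's implementation.
import torch

@torch.no_grad()
def lars_step(params, lr=0.1, weight_decay=1e-4, trust_coef=0.001):
    """One LARS update over an iterable of parameter tensors."""
    for p in params:
        if p.grad is None:
            continue
        g = p.grad + weight_decay * p      # gradient with L2 regularization
        w_norm = p.norm().item()
        g_norm = g.norm().item()
        # Trust ratio: layers with large weights relative to their
        # gradients take proportionally larger steps.
        local_lr = trust_coef * w_norm / g_norm if w_norm > 0 and g_norm > 0 else 1.0
        p.sub_(lr * local_lr * g)

# usage (after loss.backward()):
#   lars_step(model.parameters(), lr=0.1)
```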
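For the FLOP-estimation item above, here is a minimal sketch of the common hook-based approach, covering only `nn.Linear` and `nn.Conv2d` and counting one multiply-accumulate as two FLOPs. `count_flops` is a hypothetical helper, not the linked repository's API.

```python
# Minimal sketch of FLOP estimation via forward hooks; illustration only,
# not the linked repo's API. Counts Linear and Conv2d MACs (1 MAC = 2 FLOPs).
import torch
import torch.nn as nn

def count_flops(model, x):
    total = 0
    hooks = []

    def hook(module, inputs, output):
        nonlocal total
        if isinstance(module, nn.Linear):
            # (batch elements) x in_features x out_features MACs
            rows = inputs[0].numel() // inputs[0].shape[-1]
            macs = rows * module.in_features * module.out_features
        elif isinstance(module, nn.Conv2d):
            # each output element costs (C_in / groups) * kH * kW MACs
            k = module.kernel_size[0] * module.kernel_size[1]
            macs = output.numel() * (module.in_channels // module.groups) * k
        else:
            return
        total += 2 * macs

    for m in model.modules():
        hooks.append(m.register_forward_hook(hook))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return total

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Flatten(),
                      nn.Linear(16 * 30 * 30, 10))
print(count_flops(model, torch.randn(1, 3, 32, 32)))
```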