jianweif / OptimalGradCheckpointing
☆41 · Updated 4 years ago
Alternatives and similar repositories for OptimalGradCheckpointing
Users interested in OptimalGradCheckpointing are comparing it to the libraries listed below.
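Most of these projects target training-time memory or compute savings. For reference, the baseline technique that OptimalGradCheckpointing optimizes (recomputing activations during the backward pass instead of storing them all) is available in stock PyTorch via torch.utils.checkpoint. A minimal sketch, assuming a plain sequential model; the layer sizes and segment count are illustrative and not taken from the repo:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# Illustrative stack of blocks; OptimalGradCheckpointing itself searches for
# an optimal checkpoint placement on an arbitrary computation graph.
model = nn.Sequential(
    *[nn.Sequential(nn.Linear(1024, 1024), nn.ReLU()) for _ in range(8)]
)
x = torch.randn(32, 1024, requires_grad=True)

# Split the model into 4 segments: only segment-boundary activations are
# stored; everything inside a segment is recomputed during backward.
# use_reentrant=False requires a recent PyTorch release.
out = checkpoint_sequential(model, 4, x, use_reentrant=False)
loss = out.sum()
loss.backward()  # intermediate activations are recomputed here
```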
- ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training ☆199 · Updated 2 years ago
- Code for ICML 2021 submission ☆35 · Updated 4 years ago
- ☆10 · Updated 3 years ago
- pytorch-profiler ☆51 · Updated 2 years ago
- Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616 ☆132 · Updated 2 years ago
- ☆43 · Updated last year
- ☆221 · Updated 2 years ago
- AlphaNet: Improved Training of Supernet with Alpha-Divergence ☆100 · Updated 4 years ago
- ☆159 · Updated 2 years ago
- All about acceleration and compression of Deep Neural Networks ☆33 · Updated 6 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆56 · Updated 4 years ago
- Code for the paper "A Statistical Framework for Low-bitwidth Training of Deep Neural Networks" ☆29 · Updated 5 years ago
- Distributed DataLoader for PyTorch based on Ray ☆24 · Updated 4 years ago
- [ICLR 2021] CompOFA: Compound Once-For-All Networks for Faster Multi-Platform Deployment ☆24 · Updated 2 years ago
- ☆69 · Updated 5 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆137 · Updated 3 years ago
- Train neural networks with joint quantization and pruning on both weights and activations using any PyTorch modules ☆43 · Updated 3 years ago
- Code for "AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling" ☆105 · Updated 4 years ago
- Python pdb for multiple processes ☆66 · Updated 6 months ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- [JMLR'20] NeurIPS 2019 MicroNet Challenge, Efficient Language Modeling track, Champion ☆41 · Updated 4 years ago
- Customized matrix multiplication kernels ☆57 · Updated 3 years ago
- ☆243 · Updated 3 years ago
- BitPack is a practical tool to efficiently save ultra-low-precision/mixed-precision quantized models. ☆58 · Updated 2 years ago
- Official implementation of the ICLR 2022 paper "BiBERT: Accurate Fully Binarized BERT" ☆89 · Updated 2 years ago
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization ☆94 · Updated 3 years ago
- Simple Training and Deployment of Fast End-to-End Binary Networks ☆159 · Updated 3 years ago
- Code repo for the paper "BiT: Robustly Binarized Multi-distilled Transformer" ☆114 · Updated 2 years ago
- ☆43 · Updated 3 years ago
- Slicing a PyTorch Tensor Into Parallel Shards ☆301 · Updated 5 months ago