jianweif / OptimalGradCheckpointing
☆41 · Updated 3 years ago
Alternatives and similar repositories for OptimalGradCheckpointing
Users interested in OptimalGradCheckpointing are comparing it to the libraries listed below.
- Code for ICML 2021 submission ☆34 · Updated 4 years ago
- Code for the paper "A Statistical Framework for Low-bitwidth Training of Deep Neural Networks" ☆28 · Updated 4 years ago
- ☆43 · Updated last year
- Code for the NeurIPS 2019 paper "MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization" ☆54 · Updated 5 years ago
- Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616 ☆132 · Updated last year
- Source code for the paper "Robust Quantization: One Model to Rule Them All" ☆40 · Updated 2 years ago
- Code for "AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling" ☆104 · Updated 3 years ago
- ☆10 · Updated 3 years ago
- pytorch-profiler ☆51 · Updated 2 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆52 · Updated 4 years ago
- Generic Neural Architecture Search via Regression (NeurIPS'21 Spotlight) ☆36 · Updated 2 years ago
- ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training ☆200 · Updated 2 years ago
- [ICLR 2021 Spotlight] "CPT: Efficient Deep Neural Network Training via Cyclic Precision" by Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yinin… ☆31 · Updated last year
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- ☆157 · Updated last year
- [ICLR 2021] CompOFA: Compound Once-For-All Networks For Faster Multi-Platform Deployment ☆24 · Updated 2 years ago
- This repository implements the paper "Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations" ☆20 · Updated 3 years ago
- TVMScript kernel for deformable attention ☆25 · Updated 3 years ago
- ☆42 · Updated 2 years ago
- Train neural networks with joint quantization and pruning on both weights and activations using any PyTorch modules ☆41 · Updated 2 years ago
- ☆36 · Updated 2 years ago
- DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training (ICLR 2023) ☆31 · Updated 2 years ago
- NAS benchmark from "Prioritized Architecture Sampling with Monte-Carlo Tree Search", CVPR 2021 ☆37 · Updated 3 years ago
- Official PyTorch implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight… ☆63 · Updated 10 months ago
- ☆205 · Updated 2 years ago
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… ☆47 · Updated 2 years ago
- BitPack is a practical tool to efficiently save ultra-low precision/mixed-precision quantized models. ☆52 · Updated 2 years ago
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … ☆71 · Updated 2 years ago
- AlphaNet: Improved Training of Supernets with Alpha-Divergence ☆98 · Updated 3 years ago
- ☆35 · Updated 5 years ago