NUS-HPC-AI-Lab / LARS-ImageNet-PyTorch
LARS, a large-batch deep learning optimizer, for ImageNet training with PyTorch and ResNet (77% accuracy), using Horovod for distributed training. Supports optional gradient accumulation and the NVIDIA DALI data loader.
☆38 · Updated 3 years ago
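For reference, below is a minimal sketch of the LARS update rule (layer-wise adaptive rate scaling, You et al., 2017) as a PyTorch optimizer. It illustrates the algorithm only, not this repository's actual implementation; the hyperparameter names and defaults (`trust_coef`, `weight_decay`, `momentum`) are assumptions for the sketch.

```python
import torch
from torch.optim import Optimizer

class LARS(Optimizer):
    """Sketch of LARS: SGD with momentum plus a layer-wise trust ratio."""

    def __init__(self, params, lr, momentum=0.9, weight_decay=5e-4,
                 trust_coef=0.001):
        defaults = dict(lr=lr, momentum=momentum,
                        weight_decay=weight_decay, trust_coef=trust_coef)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                w_norm = torch.norm(p)
                g_norm = torch.norm(p.grad)
                # Layer-wise trust ratio:
                #   local_lr = eta * ||w|| / (||g|| + weight_decay * ||w||)
                if w_norm > 0 and g_norm > 0:
                    local_lr = group["trust_coef"] * w_norm / (
                        g_norm + group["weight_decay"] * w_norm)
                else:
                    local_lr = 1.0
                # Momentum update on the weight-decayed gradient,
                # scaled by the global and layer-wise rates.
                d = (p.grad + group["weight_decay"] * p) * (group["lr"] * local_lr)
                state = self.state[p]
                if "momentum_buffer" not in state:
                    state["momentum_buffer"] = torch.zeros_like(p)
                buf = state["momentum_buffer"]
                buf.mul_(group["momentum"]).add_(d)
                p.sub_(buf)
```

In a distributed setup like this repository's, the optimizer instance would then typically be wrapped with Horovod's `hvd.DistributedOptimizer` so that gradients are averaged across workers before each `step()`.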
Alternatives and similar repositories for LARS-ImageNet-PyTorch
Users interested in LARS-ImageNet-PyTorch are comparing it to the libraries listed below.
- PyTorch implementation of LAMB for ImageNet/ResNet-50 training ☆13 · Updated 4 years ago
- Implementation of Effective Sparsification of Neural Networks with Global Sparsity Constraint ☆31 · Updated 3 years ago
- ☆42 · Updated 2 years ago
- Code for ICML 2021 submission ☆34 · Updated 4 years ago
- ☆205 · Updated 2 years ago
- ☆46 · Updated 5 years ago
- ☆35 · Updated 3 years ago
- [NeurIPS'2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models. ICML 2021 ☆56 · Updated 3 years ago
- This project is the official implementation of our accepted ICLR 2022 paper BiBERT: Accurate Fully Binarized BERT. ☆88 · Updated last year
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Updated last year
- Lightweight torch implementation of RigL, a sparse-to-sparse optimizer. ☆56 · Updated 3 years ago
- Learning recognition/segmentation models without end-to-end training. 40%-60% less GPU memory footprint. Same training time. Better perfo… ☆90 · Updated 2 years ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆119 · Updated last year
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆89 · Updated 2 years ago
- ☆51 · Updated last year
- Code for "AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling" ☆104 · Updated 3 years ago
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆46 · Updated 2 years ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆52 · Updated last year
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Updated last year
- [ICLR 2023] "Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!" Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen… ☆28 · Updated last year
- ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training ☆200 · Updated 2 years ago
- Code for NASViT ☆67 · Updated 3 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆62 · Updated 3 years ago
- ☆40 · Updated 3 years ago
- XNAS: An effective, modular, and flexible Neural Architecture Search (NAS) framework. ☆48 · Updated 2 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" (https://openreview.net/pdf?id=SkgsACVKPH) ☆104 · Updated 5 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆50 · Updated 4 years ago
- [ICML 2022] Training Your Sparse Neural Network Better with Any Mask. Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang ☆28 · Updated 2 years ago
- ☆30 · Updated 3 years ago