NUS-HPC-AI-Lab / LARS-ImageNet-PyTorch
PyTorch implementation of the LARS large-batch optimizer for training ResNet on ImageNet (77% top-1 accuracy), using Horovod for distributed training, with optional gradient accumulation and the NVIDIA DALI dataloader.
☆38 Updated 3 years ago
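For context, LARS (Layer-wise Adaptive Rate Scaling) keeps very large batch sizes stable by rescaling each layer's update with a layer-wise "trust ratio" based on the ratio of the weight norm to the gradient norm. The sketch below illustrates that update rule on top of SGD with momentum; it is a minimal, hypothetical example, and the class name `LARSSketch`, the `trust_coefficient` default, and other hyperparameters are assumptions rather than this repository's actual API.

```python
# Minimal, illustrative sketch of the LARS update rule on top of SGD with
# momentum. Not the repository's actual implementation; names and defaults
# are assumptions for illustration only.
import torch
from torch.optim import Optimizer


class LARSSketch(Optimizer):
    def __init__(self, params, lr=0.1, momentum=0.9, weight_decay=1e-4,
                 trust_coefficient=0.001):
        defaults = dict(lr=lr, momentum=momentum, weight_decay=weight_decay,
                        trust_coefficient=trust_coefficient)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                grad = p.grad
                w_norm = torch.norm(p)
                g_norm = torch.norm(grad)
                # Layer-wise trust ratio: eta * ||w|| / (||g|| + wd * ||w||).
                if w_norm > 0 and g_norm > 0:
                    trust_ratio = group["trust_coefficient"] * w_norm / (
                        g_norm + group["weight_decay"] * w_norm)
                else:
                    trust_ratio = 1.0
                # Apply weight decay, then scale the step by the trust ratio.
                update = (grad + group["weight_decay"] * p) * trust_ratio
                # Standard momentum accumulation on the rescaled update.
                state = self.state[p]
                buf = state.setdefault("momentum_buffer", torch.zeros_like(p))
                buf.mul_(group["momentum"]).add_(update)
                p.add_(buf, alpha=-group["lr"])
```

Usage follows the usual `torch.optim` pattern (construct with `model.parameters()`, call `step()` after `backward()`); per the description above, the repository additionally relies on Horovod for distributed training and offers gradient accumulation, which this sketch does not cover.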
Related projects
Alternatives and complementary repositories for LARS-ImageNet-PyTorch
- PyTorch implementation of LAMB for ImageNet/ResNet-50 training ☆14 Updated 3 years ago
- ☆35 Updated 3 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆46 Updated 11 months ago
- ☆43 Updated 4 years ago
- In progress. ☆65 Updated 7 months ago
- ☆192 Updated last year
- Learning recognition/segmentation models without end-to-end training. 40%-60% less GPU memory footprint. Same training time. Better perfo… ☆89 Updated 2 years ago
- Official PyTorch implementation of the ICLR 2024 paper "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM…" ☆35 Updated 7 months ago
- ☆39 Updated 3 years ago
- ☆46 Updated last year
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆88 Updated last year
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆48 Updated 11 months ago
- Code for ICML 2021 submission ☆35 Updated 3 years ago
- Code for "Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot" ☆43 Updated 4 years ago
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆90 Updated 11 months ago
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆73 Updated 4 months ago
- Code for "AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling" ☆103 Updated 3 years ago
- Official implementation of the ICLR 2022 paper "BiBERT: Accurate Fully Binarized BERT". ☆84 Updated last year
- [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization. ☆57 Updated last year
- Official PyTorch implementation of "Sharpness-aware Quantization for Deep Neural Networks". ☆40 Updated 2 years ago
- This PyTorch package implements "PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance" (ICML 2022). ☆40 Updated 2 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021). ☆56 Updated 2 years ago
- ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training ☆201 Updated last year
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆167 Updated last year
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" https://openreview.net/pdf?id=SkgsACVKPH ☆100 Updated 4 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 Updated 2 years ago
- ☆35 Updated 2 years ago
- Lightweight PyTorch implementation of RigL, a sparse-to-sparse optimizer. ☆55 Updated 2 years ago
- Code for "ViTAS: Vision Transformer Architecture Search" ☆51 Updated 3 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021). ☆54 Updated 3 years ago