calgaryml / condensed-sparsity
[ICLR 2024] Dynamic Sparse Training with Structured Sparsity
☆21 · Updated last year
Alternatives and similar repositories for condensed-sparsity
Users interested in condensed-sparsity are comparing it to the repositories listed below.
- Lightweight PyTorch implementation of RigL, a sparse-to-sparse optimizer. ☆60 · Updated 3 years ago
- PyTorch implementation of Mixer-nano (0.67M parameters, versus 18M for the original Mixer-S/16) with 90.83% accuracy on CIFAR-10. Training from s… ☆36 · Updated 3 years ago
- ☆219 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- ☆234 · Updated 8 months ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆129 · Updated 2 years ago
- [ICLR 2021] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark ☆113 · Updated 2 years ago
- Awesome Pruning. ✅ Curated Resources for Neural Network Pruning. ☆170 · Updated last year
- PyHessian is a PyTorch library for second-order analysis and training of neural networks. ☆760 · Updated 3 months ago
- [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization. ☆59 · Updated 2 years ago
- PyTorch implementation of the Vision Transformer [Dosovitskiy, A. (ICLR'21)] modified to obtain over 90% accuracy FROM SCRATCH on CIFAR-10 wit… ☆200 · Updated last year
- In progress. ☆66 · Updated last year
- [NeurIPS 2021] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration ☆31 · Updated 2 years ago
- [ICML 2025] Official PyTorch code for "SASSHA: Sharpness-aware Adaptive Second-order Optimization With Stable Hessian Approximation" ☆19 · Updated 2 months ago
- ☆227 · Updated last year
- Reproducing RigL (ICML 2020) as part of the ML Reproducibility Challenge 2020 ☆29 · Updated 3 years ago
- HW-GPT-Bench: Hardware-Aware Architecture Benchmark for Language Models ☆21 · Updated 10 months ago
- Comparison of "pruning at initialization prior to training" methods (SynFlow/SNIP/GraSP) in PyTorch ☆17 · Updated last year
- ☆78 · Updated last year
- ☆36 · Updated 10 months ago
- ☆281 · Updated last year
- A curated list of awesome resources combining Transformers with Neural Architecture Search ☆268 · Updated 2 years ago
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆192 · Updated 2 years ago
- ☆45 · Updated last year
- ☆193 · Updated 4 years ago
- ☆36 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- Train ImageNet *fast* in 500 lines of code with FFCV ☆149 · Updated last year
- Pretrained models on CIFAR10/100 in PyTorch ☆372 · Updated 5 months ago
- NAS Benchmarks Collection ☆14 · Updated 2 years ago