yueb17 / DLTH
☆30 · Updated 2 years ago
Alternatives and similar repositories for DLTH:
Users interested in DLTH are comparing it to the libraries listed below.
- A generic code base for neural network pruning, especially for pruning at initialization. ☆30 · Updated 2 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆32 · Updated last year
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Updated last year
- [ICLR'21] Neural Pruning via Growing Regularization (PyTorch) ☆83 · Updated 3 years ago
- [NeurIPS 2021] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration ☆31 · Updated 2 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆73 · Updated 2 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆46 · Updated last year
- Reproducing RigL (ICML 2020) as a part of ML Reproducibility Challenge 2020 ☆28 · Updated 3 years ago
- In progress. ☆63 · Updated 10 months ago
- PyTorch implementation of our paper accepted by TPAMI 2023, "Lottery Jackpots Exist in Pre-trained Models" ☆32 · Updated last year
- PyTorch implementation of our paper accepted by IEEE TNNLS 2022, "Carrying out CNN Channel Pruning in a White Box" ☆18 · Updated 3 years ago
- [ICLR 2020] Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers. ☆31 · Updated 5 years ago
- Code for ICCV'23 paper "Automatic network pruning via Hilbert Schmidt independence criterion lasso under information bottleneck principle" ☆17 · Updated last year
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆40 · Updated 2 years ago
- [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization. ☆58 · Updated last year
- Implementation of Continuous Sparsification, a method for pruning and ticket search in deep networks ☆33 · Updated 2 years ago
- [ICML 2022] Training Your Sparse Neural Network Better with Any Mask. Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang ☆27 · Updated 2 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆87 · Updated 2 years ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆52 · Updated last year
- Data-Free Network Quantization With Adversarial Knowledge Distillation (PyTorch) ☆29 · Updated 3 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 2 years ago
- Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection ☆21 · Updated 4 years ago
- Prospect Pruning: Finding Trainable Weights at Initialization Using Meta-Gradients ☆31 · Updated 2 years ago
- Code for NASViT ☆67 · Updated 2 years ago
- [NeurIPS 2022] "Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation", Ziyu Jiang*, Xuxi Chen*, Xueqin Huan… ☆19 · Updated last year
- ☆16 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- [ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, T… ☆29 · Updated 3 years ago
- Code for our ICLR 2022 paper "Generalizing Few-Shot NAS with Gradient Matching" ☆22 · Updated 2 years ago
- Comparison of pruning-at-initialization methods (SynFlow/SNIP/GraSP) in PyTorch ☆14 · Updated 9 months ago