MAC-AutoML / ITPruner
☆23 · Updated 3 years ago
Alternatives and similar repositories for ITPruner:
Users interested in ITPruner are comparing it to the libraries listed below.
- Code for the ICCV 2023 paper "Automatic network pruning via Hilbert Schmidt independence criterion lasso under information bottleneck principle" ☆18 · Updated last year
- PyTorch implementation of the paper "Generalizable Mixed-Precision Quantization via Attribution Rank Preservation", which is… ☆25 · Updated 3 years ago
- ☆25 · Updated 3 years ago
- [ICCV 2023] EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization ☆25 · Updated last year
- XNAS: An effective, modular, and flexible Neural Architecture Search (NAS) framework ☆48 · Updated 2 years ago
- ☆16 · Updated 2 years ago
- PyTorch implementation of the TPAMI 2023 paper "Lottery Jackpots Exist in Pre-trained Models" ☆34 · Updated last year
- Towards Compact CNNs via Collaborative Compression ☆11 · Updated 2 years ago
- Paper collection on model compression and acceleration: pruning, quantization, knowledge distillation, low-rank factorization, etc. ☆24 · Updated 4 years ago
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" ☆30 · Updated 7 months ago
- PyTorch implementation of the ECCV 2022 paper "Fine-grained Data Distribution Alignment for Post-Training Quantization" ☆14 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- [ICLR 2023] Trainability Preserving Neural Pruning (PyTorch) ☆32 · Updated last year
- [CVPRW 2021] "BNN - BN = ? Training Binary Neural Networks without Batch Normalization", Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu… ☆57 · Updated 3 years ago
- Code for the ICLR 2021 paper "DrNAS: Dirichlet Neural Architecture Search" ☆44 · Updated 3 years ago
- [NeurIPS 2021] "Stronger NAS with Weaker Predictors", Junru Wu, Xiyang Dai, Dongdong Chen, Yinpeng Chen, Mengchen Liu, Ye Yu, Zhangyang W… ☆27 · Updated 2 years ago
- ☆36 · Updated 2 years ago
- "How Do Adam and Training Strategies Help BNNs Optimization?", ICML 2021 ☆59 · Updated 3 years ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆32 · Updated last year
- [NeurIPS 2021] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Updated last year
- PyTorch implementation of the ICML 2023 paper "Bi-directional Masks for Efficient N:M Sparse Training" ☆12 · Updated last year
- PyTorch implementation of the IEEE TNNLS 2022 paper "Carrying out CNN Channel Pruning in a White Box" ☆18 · Updated 3 years ago
- [ICML 2022] ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks ☆16 · Updated 2 years ago
- NAS benchmark from "Prioritized Architecture Sampling with Monto-Carlo Tree Search", CVPR 2021 ☆38 · Updated 3 years ago
- Official code of "NAS acceleration via proxy data", IJCAI 2021 ☆10 · Updated 2 years ago
- [ICLR 2021 Spotlight] "CPT: Efficient Deep Neural Network Training via Cyclic Precision" by Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yinin… ☆30 · Updated last year
- PyTorch implementation of the ICCV 2021 paper "ReCU: Reviving the Dead Weights in Binary Neural Networks" http://arxiv.org/a… ☆38 · Updated 3 years ago
- Generic Neural Architecture Search via Regression (NeurIPS 2021 Spotlight) ☆36 · Updated 2 years ago
- [ICLR 2022] "The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training" by Shiwei Liu, Tianlo… ☆73 · Updated 2 years ago
- PyTorch implementation of the TPAMI 2022 paper "1xN Pattern for Pruning Convolutional Neural Networks" ☆44 · Updated 2 years ago