haolibai / Cross-Distillation
Code for the paper "Few Shot Network Compression via Cross Distillation", AAAI 2020.
☆32 · Updated 5 years ago
Alternatives and similar repositories for Cross-Distillation
Users interested in Cross-Distillation are comparing it to the repositories listed below.
- Revisiting Parameter Sharing for Automatic Neural Channel Number Search, NeurIPS 2020 ☆21 · Updated 4 years ago
- Code for the ICLR 2021 paper "DrNAS: Dirichlet Neural Architecture Search" ☆43 · Updated 4 years ago
- Code for the NeurIPS 2019 paper "MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization" ☆54 · Updated 5 years ago
- [NeurIPS 2021] "Stronger NAS with Weaker Predictors", Junru Wu, Xiyang Dai, Dongdong Chen, Yinpeng Chen, Mengchen Liu, Ye Yu, Zhangyang W… ☆27 · Updated 2 years ago
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆90 · Updated last year
- [NeurIPS 2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, "Model Compression with Adversarial Robustness: … ☆50 · Updated 3 years ago
- Codebase for the paper "A Gradient Flow Framework for Analyzing Network Pruning" ☆21 · Updated 4 years ago
- Official code for "NAS acceleration via proxy data", IJCAI 2021 ☆10 · Updated 3 years ago
- ☆47 · Updated 5 years ago
- PyTorch implementation of GAL. ☆56 · Updated 5 years ago
- PyTorch implementation of the IEEE TNNLS 2021 paper "Filter Sketch for Network Pruning" ☆53 · Updated 4 years ago
- A PyTorch implementation of Feature Boosting and Suppression ☆18 · Updated 4 years ago
- Towards Compact CNNs via Collaborative Compression ☆11 · Updated 3 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆73 · Updated 2 years ago
- This repository implements the paper "Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations" ☆20 · Updated 3 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆91 · Updated 2 years ago
- A collection of papers on model compression and acceleration: pruning, quantization, knowledge distillation, low-rank factorization, etc. ☆25 · Updated 4 years ago
- Global Sparse Momentum SGD for pruning very deep neural networks ☆44 · Updated 2 years ago
- ☆31 · Updated 5 years ago
- Neuron Merging: Compensating for Pruned Neurons (NeurIPS 2020) ☆43 · Updated 4 years ago
- [CVPR 2021] Contrastive Neural Architecture Search with Neural Architecture Comparators ☆41 · Updated 3 years ago
- Implementation of the AAAI 2021 paper "Progressive Network Grafting for Few-Shot Knowledge Distillation". ☆32 · Updated 11 months ago
- Code for the AAAI 2019 paper "Deep Neural Network Quantization via Layer-Wise Optimization Using Limited Training Data" ☆41 · Updated 6 years ago
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (CVPR 2021) ☆64 · Updated 3 years ago
- Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression, CVPR 2020 ☆65 · Updated 3 years ago
- How Do Adam and Training Strategies Help BNNs Optimization? ICML 2021 ☆60 · Updated 4 years ago
- ☆70 · Updated 5 years ago
- Code for "AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling" ☆104 · Updated 3 years ago
- Implementation of NAT. ☆58 · Updated 5 years ago
- NAS benchmark from "Prioritized Architecture Sampling with Monto-Carlo Tree Search", CVPR 2021 ☆37 · Updated 3 years ago