mary-phuong / multiexit-distillation
☆22 · Updated 4 years ago

Related projects:
- A generic code base for neural network pruning, especially for pruning at initialization. ☆30 · Updated 2 years ago
- [ICLR 2020] "Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference" ☆24 · Updated 2 years ago
- Code for the ICLR 2021 paper "DrNAS: Dirichlet Neural Architecture Search" ☆42 · Updated 3 years ago
- ZSKD with PyTorch ☆30 · Updated last year
- PyTorch code for "Data-Free Network Quantization With Adversarial Knowledge Distillation" ☆29 · Updated 3 years ago
- Code for "ViTAS: Vision Transformer Architecture Search" ☆51 · Updated 3 years ago
- [CVPR 2021] "The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models", Tianlong Chen, Jon… ☆67 · Updated last year
- [NeurIPS 2021] "Stronger NAS with Weaker Predictors", Junru Wu, Xiyang Dai, Dongdong Chen, Yinpeng Chen, Mengchen Liu, Ye Yu, Zhangyang W… ☆27 · Updated last year
- Code for "Understanding Architectures Learnt by Cell-based Neural Architecture Search" ☆27 · Updated 4 years ago
- [CVPR 2021] Contrastive Neural Architecture Search with Neural Architecture Comparators ☆39 · Updated 2 years ago
- Knowledge Transfer via Dense Cross-layer Mutual-distillation (ECCV 2020) ☆30 · Updated 4 years ago
- Code for Active Mixup (CVPR 2020) ☆22 · Updated 2 years ago
- Paper collection on model compression and acceleration: pruning, quantization, knowledge distillation, low-rank factorization, etc. ☆23 · Updated 3 years ago
- Codebase for the paper "A Gradient Flow Framework for Analyzing Network Pruning" ☆21 · Updated 3 years ago
- A PyTorch implementation of Feature Boosting and Suppression ☆16 · Updated 4 years ago
- Paper and code for "Curriculum Learning by Optimizing Learning Dynamics" (AISTATS 2021) ☆18 · Updated 3 years ago
- Code for the paper "Few Shot Network Compression via Cross Distillation" (AAAI 2020) ☆30 · Updated 4 years ago
- [CVPR 2022] DiSparse: Disentangled Sparsification for Multitask Model Compression ☆13 · Updated 2 years ago
- Neuron Merging: Compensating for Pruned Neurons (NeurIPS 2020) ☆41 · Updated 3 years ago
- A PyTorch implementation of scalable neural networks. ☆23 · Updated 4 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆80 · Updated 2 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated last year
- Repo for the paper "Extrapolating from a Single Image to a Thousand Classes using Distillation" ☆37 · Updated 2 months ago
- Code for the CVPR 2021 paper "MOOD: Multi-level Out-of-distribution Detection" ☆38 · Updated last year
- NAS benchmark from "Prioritized Architecture Sampling with Monto-Carlo Tree Search" (CVPR 2021) ☆38 · Updated 3 years ago