SJLeo / FFSD
PyTorch implementation of our paper accepted by IEEE TNNLS, 2022 -- Distilling a Powerful Student Model via Online Knowledge Distillation
☆30 · Updated 4 years ago
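For context: online knowledge distillation trains peer student networks together, each peer learning from the ground-truth labels and from the other peers' soft predictions, instead of distilling from a fixed pre-trained teacher. Below is a minimal PyTorch sketch of that generic mutual-distillation idea; it is an illustration only, not the FFSD method from the paper, and the `peers`/`optimizer` names are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def online_kd_step(peers, optimizer, x, y, T=3.0, alpha=0.5):
    """One step of generic online (mutual) distillation: each peer is
    trained on the hard labels plus the averaged, detached soft targets
    of the other peers. Illustrative sketch, not the FFSD algorithm."""
    logits = [m(x) for m in peers]
    total = 0.0
    for i in range(len(peers)):
        ce = F.cross_entropy(logits[i], y)
        # Average the other peers' logits; detach so gradients only
        # flow into peer i.
        others = torch.stack([logits[j].detach()
                              for j in range(len(peers)) if j != i])
        soft_target = F.softmax(others.mean(0) / T, dim=1)
        kd = F.kl_div(F.log_softmax(logits[i] / T, dim=1),
                      soft_target, reduction="batchmean") * (T * T)
        total = total + (1 - alpha) * ce + alpha * kd
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```

A typical setup would put all peers' parameters in a single optimizer, e.g. `torch.optim.SGD(itertools.chain.from_iterable(m.parameters() for m in peers), lr=0.1)`.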
Alternatives and similar repositories for FFSD
Users interested in FFSD are comparing it to the repositories listed below.
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (CVPR 2021) ☆65 · Updated 4 years ago
- ☆57 · Updated 4 years ago
- ☆20 · Updated 2 years ago
- ☆27 · Updated 3 years ago
- Code for the paper "Self-Distillation from the Last Mini-Batch for Consistency Regularization" ☆43 · Updated 3 years ago
- Code for Feature Fusion for Online Mutual Knowledge Distillation ☆27 · Updated 5 years ago
- A simple PyTorch reimplementation of Online Knowledge Distillation via Collaborative Learning ☆50 · Updated 3 years ago
- Code for Active Mixup (CVPR 2020) ☆23 · Updated 3 years ago
- Source code for "Dual-Level Knowledge Distillation via Knowledge Alignment and Correlation", TNNLS, https://ieeexplore.ieee.org/abstract/… ☆12 · Updated 3 years ago
- Revisiting Parameter Sharing for Automatic Neural Channel Number Search, NeurIPS 2020 ☆22 · Updated 5 years ago
- Code for the paper "Few Shot Network Compression via Cross Distillation", AAAI 2020 ☆31 · Updated 5 years ago
- Neuron Merging: Compensating for Pruned Neurons (NeurIPS 2020) ☆43 · Updated 4 years ago
- [AAAI-2020] Official implementation of "Online Knowledge Distillation with Diverse Peers" ☆75 · Updated 2 years ago
- [ICLR 2021] Heteroskedastic and Imbalanced Deep Learning with Adaptive Regularization ☆42 · Updated 4 years ago
- Released code for the paper "ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding" ☆31 · Updated 5 years ago
- ☆23 · Updated 6 years ago
- Code for ViTAS: Vision Transformer Architecture Search ☆51 · Updated 4 years ago
- ☆34 · Updated 2 years ago
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Updated 2 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- ☆27 · Updated 4 years ago
- Auto-Prox-AAAI24 ☆14 · Updated last year
- ICML'20: SIGUA: Forgetting May Make Learning with Noisy Labels More Robust ☆17 · Updated 5 years ago
- Implementation of the Heterogeneous Knowledge Distillation using Information Flow Modeling method ☆25 · Updated 5 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆82 · Updated 3 years ago
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective ☆37 · Updated 3 years ago
- Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification ☆82 · Updated 4 years ago
- The official MegEngine implementation of the ECCV 2022 paper "Efficient One Pass Self-distillation with Zipf's Label Smoothing" ☆27 · Updated 3 years ago
- Knowledge Transfer via Dense Cross-layer Mutual-distillation (ECCV 2020) ☆30 · Updated 5 years ago
- ☆31 · Updated 5 years ago