Regularizing Class-wise Predictions via Self-knowledge Distillation (CVPR 2020)
☆109 · Jun 18, 2020 · Updated 5 years ago
Alternatives and similar repositories for cs-kd
Users interested in cs-kd are comparing it to the repositories listed below.
- Official implementation of "Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation" (CVPR 2021) ☆103 · Apr 30, 2024 · Updated last year
- Role-Wise Data Augmentation for Knowledge Distillation ☆19 · Nov 22, 2022 · Updated 3 years ago
- Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification ☆82 · Jun 9, 2021 · Updated 4 years ago
- Official MegEngine implementation of the ECCV 2022 paper "Efficient One Pass Self-distillation with Zipf's Label Smoothin…" ☆28 · Oct 19, 2022 · Updated 3 years ago
- ☆128 · Nov 2, 2020 · Updated 5 years ago
- [ECCV 2020] Knowledge Distillation Meets Self-Supervision ☆238 · Dec 15, 2022 · Updated 3 years ago
- Revisiting Knowledge Distillation via Label Smoothing Regularization (CVPR 2020 Oral) ☆585 · Feb 15, 2023 · Updated 3 years ago
- Official PyTorch implementation of PS-KD ☆89 · Aug 5, 2022 · Updated 3 years ago
- (Unofficial) Data-Distortion Guided Self-Distillation for Deep Neural Networks (AAAI 2019) ☆14 · May 12, 2021 · Updated 4 years ago
- Code for the paper "Self-Distillation from the Last Mini-Batch for Consistency Regularization" ☆43 · Sep 27, 2022 · Updated 3 years ago
- A PyTorch implementation of scalable neural networks ☆23 · Jun 9, 2020 · Updated 5 years ago
- Graph Knowledge Distillation ☆13 · Mar 6, 2020 · Updated 5 years ago
- A PyTorch implementation of the paper "Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation", … ☆180 · Jan 29, 2022 · Updated 4 years ago
- [AAAI-2020] Official implementation of "Online Knowledge Distillation with Diverse Peers" ☆76 · Jul 6, 2023 · Updated 2 years ago
- Code for "Self-Distillation as Instance-Specific Label Smoothing" ☆16 · Oct 22, 2020 · Updated 5 years ago
- Code for Active Mixup (CVPR 2020) ☆23 · Jan 11, 2022 · Updated 4 years ago
- Switchable Online Knowledge Distillation ☆19 · Oct 27, 2024 · Updated last year
- Knowledge Transfer via Dense Cross-layer Mutual-distillation (ECCV 2020) ☆30 · Aug 19, 2020 · Updated 5 years ago
- Open-source code for the paper "Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition" ☆65 · Apr 1, 2022 · Updated 3 years ago
- Self-supervised Label Augmentation via Input Transformations (ICML 2020) ☆107 · Nov 28, 2020 · Updated 5 years ago
- Lookahead: A Far-sighted Alternative of Magnitude-based Pruning (ICLR 2020) ☆32 · Oct 25, 2020 · Updated 5 years ago
- ☆27 · Feb 6, 2021 · Updated 5 years ago
- Continual learning using variational prototype replays ☆10 · Oct 17, 2020 · Updated 5 years ago
- PyTorch implementation of "Filter Sketch for Network Pruning" (IEEE TNNLS, 2021) ☆53 · Feb 11, 2021 · Updated 5 years ago
- Code for the paper "Training CNNs with Selective Allocation of Channels" (ICML 2019) ☆25 · May 14, 2019 · Updated 6 years ago
- Code for the CVPR 2020 paper "Distilling Cross-Task Knowledge via Relationship Matching" ☆49 · Feb 7, 2021 · Updated 5 years ago
- Official PyTorch implementation of Relational Knowledge Distillation (CVPR 2019) ☆414 · May 17, 2021 · Updated 4 years ago
- TF-FD ☆20 · Nov 19, 2022 · Updated 3 years ago
- Awesome Knowledge-Distillation: a categorized collection of knowledge-distillation papers (2014–2021) ☆2,654 · May 30, 2023 · Updated 2 years ago
- [ECCV-2022] Official implementation of MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition && PyTorch Implementations of… ☆110 · Nov 28, 2022 · Updated 3 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆83 · Dec 30, 2021 · Updated 4 years ago
- PyTorch implementation of Adversarially Robust Distillation (ARD) ☆59 · May 24, 2019 · Updated 6 years ago
- ☆15 · Aug 25, 2020 · Updated 5 years ago
- ☆61 · Apr 24, 2020 · Updated 5 years ago
- Official implementation of "Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching" (AAAI 2021) ☆119 · Feb 9, 2021 · Updated 5 years ago
- Deeply-supervised Knowledge Synergy (CVPR 2019) ☆67 · Jul 25, 2021 · Updated 4 years ago
- DropNet: Reducing Neural Network Complexity via Iterative Pruning (ICML 2020) ☆16 · Aug 24, 2020 · Updated 5 years ago
- Self-Distillation with weighted ground-truth targets; ResNet and Kernel Ridge Regression ☆19 · Oct 12, 2021 · Updated 4 years ago
- Implementation of the Heterogeneous Knowledge Distillation using Information Flow Modeling method ☆25 · May 25, 2020 · Updated 5 years ago