AE-KD (☆27, updated Feb 6, 2021)
Alternatives and similar repositories for AE-KD
Users interested in AE-KD are comparing it to the libraries listed below.
- Knowledge Transfer via Dense Cross-layer Mutual-distillation (ECCV 2020) (☆30, updated Aug 19, 2020)
- PyTorch implementation of the Knowledge Evolution approach and Split-Nets (CVPR 2021 Oral) (☆83, updated Oct 9, 2021)
- Implementation of the ICASSP 2022 paper "Confidence-Aware Multi-Teacher Knowledge Distillation" (☆63, updated Feb 12, 2022)
- Source code for "Dual-Level Knowledge Distillation via Knowledge Alignment and Correlation" (TNNLS), https://ieeexplore.ieee.org/abstract/… (☆12, updated Dec 21, 2022)
- Graph Knowledge Distillation (☆13, updated Mar 6, 2020)
- PyTorch implementation of the IEEE TNNLS 2021 paper "Network Pruning using Adaptive Exemplar Filters" (☆24, updated Apr 4, 2021)
- Source code for the ACCV 2022 paper "Feature Decoupled Knowledge Distillation via Spatial Pyramid Pooling" (☆12, updated Jul 5, 2023)
- Public code release for the paper "Reawakening knowledge: Anticipatory recovery from catastrophic interference via structured training" (☆11, updated Oct 27, 2025)
- Official codebase for the paper "Joslim: Joint Widths and Weights Optimization for Slimmable Neural Networks" (☆12, updated Jun 30, 2021)
- Knowledge Amalgamation from Heterogeneous Networks by Common Feature Learning (IJCAI 2019) (☆10, updated Nov 25, 2022)
- Deep Neural Network Compression based on Student-Teacher Network (☆14, updated Jul 6, 2023)
- PyTorch implementation of Channel Distillation (☆103, updated Jun 9, 2020)
- Knowledge Amalgamation, Multi-teacher KD, Ensemble KD (☆12, updated Sep 21, 2021)
- Released code for the paper "ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding" (☆31, updated Nov 24, 2020)
- AMTML-KD: Adaptive Multi-teacher Multi-level Knowledge Distillation (☆66, updated Mar 9, 2021)
- Code for "Self-Distillation as Instance-Specific Label Smoothing" (☆16, updated Oct 22, 2020)
- Official PyTorch implementation of KDEP, "Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-eff…" (CVPR 2022) (☆60, updated Jul 21, 2022)
- SHAKE (☆18, updated Apr 14, 2023)
- Topology Distillation for Recommender System (KDD 2021) (☆13, updated Sep 2, 2021)
- Knowledge Amalgamation Engine (☆99, updated Feb 28, 2024)
- Switchable Online Knowledge Distillation (☆19, updated Oct 27, 2024)
- PyTorch implementation for the paper "Towards Realistic Predictors" (☆17, updated Sep 26, 2018)
- Regularizing Class-wise Predictions via Self-knowledge Distillation (CVPR 2020) (☆109, updated Jun 18, 2020)
- "Discretization-Aware Architecture Search" (DAAS) alleviates the discretization gap in one-shot differentiable NAS; accepted by PR… (☆20, updated Jul 30, 2021)
- Role-Wise Data Augmentation for Knowledge Distillation (☆19, updated Nov 22, 2022)
- Official implementation of "Cross-Layer Distillation with Semantic Calibration" (AAAI 2021, TKDE 2023) (☆78, updated Jul 29, 2024)
- Knowledge Distillation Meets Self-Supervision (ECCV 2020) (☆237, updated Dec 15, 2022)
- Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification (☆82, updated Jun 9, 2021)
- (☆20, updated Sep 28, 2020)
- Official PyTorch implementation of the CVPR 2022 paper "Bootstrapping ViTs: Towards Liberating Vision Transformers from Pre-training" (☆19, updated Apr 11, 2022)
- Spatio-temporal tasks (☆16, updated Jul 15, 2024)
- Reproducing VID from CVPR 2019 (work in progress) (☆20, updated Nov 25, 2019)
- (☆50, updated Jun 12, 2023)
- (☆27, updated Jun 20, 2021)
- (☆22, updated Sep 20, 2020)
- (☆47, updated Sep 9, 2021)
- TensorFlow implementation of "Meta Dropout: Learning to Perturb Latent Features for Generalization" (ICLR 2020) (☆27, updated Apr 27, 2020)
- Code release for "Catastrophic Forgetting Meets Negative Transfer: Batch Spectral Shrinkage for Safe Transfer Learning" (NeurIPS 2019) (☆24, updated Nov 29, 2021)
- Latest Weight Averaging (NeurIPS HITY Workshop 2022) (☆32, updated Jun 20, 2023)