sundw2014 / DCM
Knowledge Transfer via Dense Cross-layer Mutual-distillation (ECCV 2020)
☆30 · Updated 4 years ago
Alternatives and similar repositories for DCM:
Users interested in DCM are comparing it to the libraries listed below.
- Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification ☆81 · Updated 3 years ago
- Source code for "Knowledge Distillation via Instance Relationship Graph" ☆29 · Updated 5 years ago
- ☆56 · Updated 3 years ago
- Implementation of the Heterogeneous Knowledge Distillation using Information Flow Modeling method ☆24 · Updated 4 years ago
- ☆60 · Updated 4 years ago
- [ICCV 2019] Attract or Distract: Exploit the Margin of Open Set ☆35 · Updated 4 years ago
- (NeurIPS 2020 Workshop on SSL) Official implementation of "MixCo: Mix-up Contrastive Learning for Visual Representation" ☆58 · Updated 2 years ago
- Code release for the NeurIPS 2020 paper "Co-Tuning for Transfer Learning" ☆40 · Updated 2 years ago
- Complementary Relation Contrastive Distillation ☆14 · Updated 3 years ago
- Code for the CVPR 2020 paper "Distilling Cross-Task Knowledge via Relationship Matching" ☆48 · Updated 3 years ago
- Code repository for "Learning Classifier Synthesis for Generalized Few-shot Learning" ☆22 · Updated 3 years ago
- ☆26 · Updated 3 years ago
- [ICLR 2022] Fast AdvProp ☆34 · Updated 2 years ago
- Code for "Feature Fusion for Online Mutual Knowledge Distillation" ☆24 · Updated 4 years ago
- Code for "Joint Optimization Framework for Learning with Noisy Labels" ☆38 · Updated 6 years ago
- Code for "Associative Alignment for Few-shot Image Classification" (ECCV 2020) ☆20 · Updated 4 years ago
- ☆46 · Updated 3 years ago
- ☆17 · Updated 5 years ago
- SKD: Self-supervised Knowledge Distillation for Few-shot Learning ☆96 · Updated last year
- Code for "Multi-Task Curriculum Framework for Open-Set Semi-Supervised Learning" ☆23 · Updated 4 years ago
- [CVPR 2021] MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition ☆62 · Updated 2 years ago
- ☆59 · Updated 2 years ago
- A PyTorch implementation of scalable neural networks ☆23 · Updated 4 years ago
- ☆14 · Updated 3 years ago
- ☆45 · Updated 3 years ago
- Code for "Balanced Knowledge Distillation for Long-tailed Learning" ☆27 · Updated last year
- Implementation of the AAAI 2021 paper "Progressive Network Grafting for Few-Shot Knowledge Distillation" ☆32 · Updated 5 months ago
- Code for ViTAS: Vision Transformer Architecture Search ☆52 · Updated 3 years ago
- [AAAI 2020] Official implementation of "Online Knowledge Distillation with Diverse Peers" ☆72 · Updated last year
- Bag of Instances Aggregation Boosts Self-supervised Distillation (ICLR 2022) ☆33 · Updated 2 years ago