cadurosar / graph_kd
Graph Knowledge Distillation
☆13 · Updated 5 years ago
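For context on the topic of this repository and the list below: knowledge distillation generally trains a small student model to match a teacher's temperature-softened output distribution. The following is a minimal, generic sketch of the standard distillation loss (Hinton-style KL with a T² scale factor) in plain NumPy — it is an illustrative assumption, not code from graph_kd or any listed repository:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces a softer distribution.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence from the softened teacher distribution to the softened
    # student distribution, scaled by T^2 so gradients keep a comparable
    # magnitude across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

Graph-based variants such as those collected here typically replace or augment this per-example term with relational structure between instances (e.g. pairwise similarity graphs), but the softened-target idea above is the common starting point.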
Alternatives and similar repositories for graph_kd
Users interested in graph_kd are comparing it to the libraries listed below.
- [AAAI-2020] Official implementation for "Online Knowledge Distillation with Diverse Peers" ☆74 · Updated 2 years ago
- Source code for "Knowledge Distillation via Instance Relationship Graph" ☆30 · Updated 6 years ago
- NeurIPS 2019: Learning to Propagate for Graph Meta-Learning ☆36 · Updated 5 years ago
- Feature Fusion for Online Mutual Knowledge Distillation code ☆26 · Updated 5 years ago
- Adjust Decision Boundary for Class Imbalanced Learning ☆19 · Updated 5 years ago
- Code release for "Catastrophic Forgetting Meets Negative Transfer: Batch Spectral Shrinkage for Safe Transfer Learning" (NeurIPS 2019) ☆24 · Updated 3 years ago
- PyTorch implementation of the paper "SuperLoss: A Generic Loss for Robust Curriculum Learning" (NeurIPS 2020) ☆29 · Updated 4 years ago
- Code for the CVPR 2019 paper "Label Propagation for Deep Semi-supervised Learning" ☆116 · Updated 5 years ago
- Implementation of the "Adapting Auxiliary Losses Using Gradient Similarity" article ☆32 · Updated 6 years ago
- [CVPR 2020] Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition from a Domain Adaptation Perspective ☆24 · Updated 4 years ago
- Chainer implementation of "TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning" ☆56 · Updated 5 years ago
- PyTorch implementation of "Distilling a Powerful Student Model via Online Knowledge Distillation" (IEEE TNNLS, 2022) ☆29 · Updated 3 years ago
- ICML'19: "How does Disagreement Help Generalization against Label Corruption?" ☆87 · Updated 6 years ago
- Implementation of the Heterogeneous Knowledge Distillation using Information Flow Modeling method ☆25 · Updated 5 years ago
- Knowledge Transfer via Dense Cross-layer Mutual-distillation (ECCV 2020) ☆29 · Updated 4 years ago
- Code release for "Transferable Normalization: Towards Improving Transferability of Deep Neural Networks" (NeurIPS 2019) ☆79 · Updated 4 years ago
- Continual learning using variational prototype replays ☆9 · Updated 4 years ago
- SKD: Self-supervised Knowledge Distillation for Few-shot Learning ☆98 · Updated last year
- Sinkhorn Label Allocation is a label assignment method for semi-supervised self-training algorithms. The SLA algorithm is described in fu… ☆53 · Updated 4 years ago
- Code for the CVPR'20 paper "Distilling Cross-Task Knowledge via Relationship Matching" ☆49 · Updated 4 years ago
- A Generic Multi-classifier Paradigm for Incremental Learning ☆11 · Updated 4 years ago
- ICML'20: "SIGUA: Forgetting May Make Learning with Noisy Labels More Robust" ☆15 · Updated 4 years ago
- Reproducing experimental results of OOD-by-MCD [Yu and Aizawa et al., ICCV 2019] ☆30 · Updated 5 years ago
- ☆27 · Updated 4 years ago
- Code for the paper "Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets" (ICCV 2019) ☆27 · Updated 5 years ago
- [AAAI 2021] Curriculum Labeling: Revisiting Pseudo-Labeling for Semi-Supervised Learning ☆139 · Updated 4 years ago
- ☆61 · Updated 3 years ago
- Infinite mixture prototypes for few-shot learning ☆51 · Updated 2 years ago
- Code for the paper "Training CNNs with Selective Allocation of Channels" (ICML 2019) ☆25 · Updated 6 years ago
- Code for "Self-Distillation as Instance-Specific Label Smoothing" ☆16 · Updated 4 years ago