cadurosar/graph_kd — Graph Knowledge Distillation
☆13, updated 4 years ago
Alternatives and similar repositories for graph_kd:
Users interested in graph_kd are comparing it to the repositories listed below.
- NeurIPS 2019: Learning to Propagate for Graph Meta-Learning (☆36, updated 5 years ago)
- [CVPR 2020] Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition from a Domain Adaptation Perspective (☆24, updated 4 years ago)
- Source code for "Knowledge Distillation via Instance Relationship Graph" (☆29, updated 5 years ago)
- A Generic Multi-classifier Paradigm for Incremental Learning (☆11, updated 4 years ago)
- (☆26, updated 3 years ago)
- Continual learning using variational prototype replays (☆9, updated 4 years ago)
- (☆59, updated 2 years ago)
- Adjust Decision Boundary for Class Imbalanced Learning (☆19, updated 4 years ago)
- Knowledge Transfer via Dense Cross-layer Mutual-distillation (ECCV 2020) (☆30, updated 4 years ago)
- Code for "Multi-Task Curriculum Framework for Open-Set Semi-Supervised Learning" (☆23, updated 4 years ago)
- Source code for "Distilling Knowledge From Graph Convolutional Networks", CVPR 2020 (☆58, updated last year)
- [AAAI 2020] Official implementation of "Online Knowledge Distillation with Diverse Peers" (☆72, updated last year)
- Code release for "Catastrophic Forgetting Meets Negative Transfer: Batch Spectral Shrinkage for Safe Transfer Learning" (NeurIPS 2019) (☆24, updated 3 years ago)
- Source code of the ACM MM 2019 paper "TGG: Transferable Graph Generation for Zero-shot and Few-shot Learning" (☆25, updated 4 years ago)
- Code for "Balanced Knowledge Distillation for Long-tailed Learning" (☆27, updated last year)
- Code for "Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters" (☆27, updated last year)
- Code for the paper "Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets", ICCV 2019 (☆27, updated 4 years ago)
- Code release for "Learning to Adapt to Evolving Domains" (☆30, updated 3 years ago)
- (☆60, updated 4 years ago)
- Code release for "Dynamic Domain Adaptation for Efficient Inference" (CVPR 2021) (☆33, updated 3 years ago)
- Triplet Loss for Knowledge Distillation (☆17, updated 2 years ago)
- Official PyTorch implementation of the paper "Continual Meta-Learning with Bayesian Graph Neural Networks" (AAAI 2020) (☆61, updated 4 years ago)
- Official implementation of "Multi-Objective Interpolation Training for Robustness to Label Noise" (☆39, updated 2 years ago)
- [NeurIPS 2020] "Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free" by Haotao Wang*, Tianlong C… (☆43, updated 3 years ago)
- (☆29, updated 3 years ago)
- ICML 2020: "SIGUA: Forgetting May Make Learning with Noisy Labels More Robust" (☆13, updated 4 years ago)
- Code for "Feature Fusion for Online Mutual Knowledge Distillation" (☆24, updated 4 years ago)
- Implementation of the AAAI 2021 paper "Progressive Network Grafting for Few-Shot Knowledge Distillation" (☆32, updated 6 months ago)
- Infinite Mixture Prototypes for Few-Shot Learning (☆50, updated 2 years ago)
- Implementation of the paper "Adapting Auxiliary Losses Using Gradient Similarity" (☆32, updated 5 years ago)