wonchulSon / DGKD
Densely Guided Knowledge Distillation using Multiple Teacher Assistants
☆11 · Updated 3 years ago
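The repo implements densely guided knowledge distillation: instead of passing knowledge down a sequential chain of teacher assistants (as in TAKD), the student is supervised by the teacher and every assistant simultaneously. A minimal sketch of such a logits-level objective follows; the function and argument names are illustrative, not the repo's actual API:

```python
import torch.nn.functional as F

def dgkd_loss(student_logits, guide_logits_list, targets, T=4.0, alpha=0.5):
    """Cross-entropy plus averaged KL divergence to the teacher
    and every teacher assistant (dense guidance).

    guide_logits_list: [teacher_logits, ta1_logits, ta2_logits, ...]
    """
    ce = F.cross_entropy(student_logits, targets)
    kd = 0.0
    for g in guide_logits_list:
        # Temperature-scaled KL between student and each guide,
        # rescaled by T^2 as is standard in logit distillation.
        kd = kd + F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(g / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
    kd = kd / len(guide_logits_list)
    return (1 - alpha) * ce + alpha * kd
```

The paper also regularizes training by randomly dropping some of the guides each iteration; the sketch keeps all of them for clarity.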
Alternatives and similar repositories for DGKD:
Users interested in DGKD are comparing it to the repositories listed below.
- Official PyTorch implementation of PS-KD (see the sketch after this list) ☆86 · Updated 2 years ago
- The official codes of our CVPR-2023 paper: Sharpness-Aware Gradient Matching for Domain Generalization ☆75 · Updated last year
- Code for 'Multi-level Logit Distillation' (CVPR2023) ☆63 · Updated 7 months ago
- Probabilistic lifElong Test-time Adaptation with seLf-training prior (PETAL) ☆13 · Updated last year
- This is the official repo for the CVPR 2021 L2ID paper "Distill on the Go: Online knowledge distillation in self-supervised learning" ☆13 · Updated 3 years ago
- Code of Data-Free Knowledge Distillation via Feature Exchange and Activation Region Constraint ☆17 · Updated last year
- Pytorch implementation of our paper accepted by IEEE TNNLS, 2022: Carrying out CNN Channel Pruning in a White Box ☆18 · Updated 3 years ago
- This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation). ☆77 · Updated last month
- Awesome Knowledge-Distillation for CV ☆83 · Updated 11 months ago
- ☆26 · Updated 2 years ago
- [CVPR-2022] Official implementation for "Knowledge Distillation with the Reused Teacher Classifier". ☆94 · Updated 2 years ago
- Official repo for the WACV 2023 paper: Federated Domain Generalization for Image Recognition via Cross-Client Style Transfer. ☆29 · Updated last year
- ☆63 · Updated last year
- Code for our CVPR 2022 workshop paper "Towards Exemplar-Free Continual Learning in Vision Transformers" ☆22 · Updated 2 years ago
- [NeurIPS-2021] Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data ☆44 · Updated 2 years ago
- Official code for the CVPR23 paper: "Improved Test-Time Adaptation for Domain Generalization" ☆33 · Updated 3 months ago
- The code of the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR2023) ☆40 · Updated 2 years ago
- ☆26 · Updated last year
- Official code for Scale Decoupled Distillation ☆40 · Updated last year
- Implementation of HAT https://arxiv.org/pdf/2204.00993 ☆49 · Updated last year
- [ECCV-2022] Official implementation of MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition && Pytorch Implementations of… ☆106 · Updated 2 years ago
- The official PyTorch Implementation of "NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation (NeurIPS '22)" ☆44 · Updated last year
- ☆32 · Updated 3 years ago
- [ICCV 2021] Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain ☆75 · Updated 2 years ago
- [CVPR 2023] Robust Test-Time Adaptation in Dynamic Scenarios. https://arxiv.org/abs/2303.13899 ☆60 · Updated last year
- [CVPR 2024] VkD: Improving Knowledge Distillation using Orthogonal Projections ☆53 · Updated 6 months ago
- [ICLR 2023] The Devil is in the Wrongly-classified Samples: Towards Unified Open-set Recognition ☆30 · Updated 2 years ago
- This codebase is the official implementation of Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization (NeurIPS2… ☆97 · Updated 3 years ago
- Feature Fusion for Online Mutual Knowledge Distillation Code ☆26 · Updated 4 years ago
- ☆26 · Updated 2 years ago
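For the PS-KD entry above, a minimal sketch of progressive self-knowledge distillation: each sample's soft target mixes the one-hot label with the model's own prediction for that sample from the previous epoch, and the mixing weight grows linearly over training. The `past_probs` buffer and the hyperparameter names are assumptions for illustration, not the official repo's interface:

```python
import torch.nn.functional as F

def pskd_loss(logits, targets, past_probs, epoch, total_epochs, alpha_T=0.8):
    """Soft cross-entropy against targets that blend the one-hot label
    with the model's own previous-epoch prediction for each sample."""
    # Mixing weight ramps linearly from ~0 up to alpha_T.
    alpha_t = alpha_T * (epoch + 1) / total_epochs
    num_classes = logits.size(1)
    one_hot = F.one_hot(targets, num_classes).float()
    soft_targets = (1.0 - alpha_t) * one_hot + alpha_t * past_probs
    log_probs = F.log_softmax(logits, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()
```

In this reading, the model distills from its own past self, so early in training the loss is close to plain cross-entropy and the self-teaching signal strengthens as the stored predictions become more reliable.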