wonchulSon / DGKD
Densely Guided Knowledge Distillation using Multiple Teacher Assistants
☆11 · Updated 3 years ago
Alternatives and similar repositories for DGKD
Users interested in DGKD are comparing it to the repositories listed below.
- Official PyTorch implementation of PS-KD ☆88 · Updated 2 years ago
- [CVPR 2022] Official implementation of "Knowledge Distillation with the Reused Teacher Classifier" ☆97 · Updated 3 years ago
- ☆33 · Updated 4 years ago
- Official implementation of the paper "Knowledge Diffusion for Distillation", NeurIPS 2023 ☆88 · Updated last year
- Official code for the CVPR 2023 paper "Sharpness-Aware Gradient Matching for Domain Generalization" ☆75 · Updated 2 years ago
- Code for "Multi-level Logit Distillation" (CVPR 2023) ☆65 · Updated 9 months ago
- This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation) ☆78 · Updated 3 months ago
- Official implementation of ImbSAM (Imbalanced-SAM) ☆24 · Updated last year
- [CVPR 2023] Class Attention Transfer Based Knowledge Distillation ☆44 · Updated 2 years ago
- Code for "Data-Free Knowledge Distillation via Feature Exchange and Activation Region Constraint" ☆18 · Updated last year
- [ICCV 2021] Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain ☆75 · Updated 2 years ago
- ☆46 · Updated 3 years ago
- ☆26 · Updated last year
- Official implementation of Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization (NeurIPS2… ☆100 · Updated 3 years ago
- PyTorch implementation of Task Adaptive Parameter Sharing for Multi-Task Learning (CVPR 2022) ☆26 · Updated last year
- Official repo for the CVPR 2021 L2ID paper "Distill on the Go: Online knowledge distillation in self-supervised learning" ☆12 · Updated 3 years ago
- SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization ☆48 · Updated 2 years ago
- Official PyTorch implementation of "NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation" (NeurIPS '22) ☆46 · Updated last year
- [NeurIPS 2021] Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data ☆45 · Updated 2 years ago
- PyTorch implementation of the CVPR 2024 paper "Unified Entropy Optimization for Open-Set Test-Time Adaptation" ☆26 · Updated 9 months ago
- Awesome Knowledge-Distillation for CV ☆88 · Updated last year
- [CVPR 2023] "Understanding and Improving Visual Prompting: A Label-Mapping Perspective" by Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zha… ☆53 · Updated last year
- [AAAI 2024] Towards Real-World Test-Time Adaptation: Tri-Net Self-Training with Balanced Normalization ☆25 · Updated 2 months ago
- [CVPR 2023] Robust Test-Time Adaptation in Dynamic Scenarios. https://arxiv.org/abs/2303.13899 ☆63 · Updated last year
- Official implementation of "Knowledge Distillation from A Stronger Teacher", NeurIPS 2022 ☆146 · Updated 2 years ago
- Code for the ICML 2022 paper "Efficient Test-Time Model Adaptation without Forgetting" ☆125 · Updated 2 years ago
- Official code for the NeurIPS 2022 paper "How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders" ☆58 · Updated last year
- Implementation of HAT (https://arxiv.org/pdf/2204.00993) ☆50 · Updated last year
- Code for "Improving Robustness of Vision Transformers by Reducing Sensitivity to Patch Corruptions" ☆11 · Updated last year
- [NeurIPS 2021] TTT++: When Does Self-supervised Test-time Training Fail or Thrive? ☆70 · Updated 3 years ago