skgyu / SpaceshipNet
Code of Data-Free Knowledge Distillation via Feature Exchange and Activation Region Constraint
☆17 · Updated last year
Alternatives and similar repositories for SpaceshipNet:
Users interested in SpaceshipNet are comparing it to the repositories listed below.
- [CVPR23] "Understanding and Improving Visual Prompting: A Label-Mapping Perspective" by Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zha… ☆52 · Updated last year
- ICCV 2023 - AdaptGuard: Defending Against Universal Attacks for Model Adaptation ☆11 · Updated last year
- This is the source code for Detecting Adversarial Data by Probing Multiple Perturbations Using Expected Perturbation Score (ICML 2023). ☆37 · Updated 4 months ago
- ☆63 · Updated last year
- The code of the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated last year
- Data-Free Knowledge Distillation ☆20 · Updated 2 years ago
- [IJCAI 2021] Contrastive Model Inversion for Data-Free Knowledge Distillation ☆71 · Updated 2 years ago
- Official Implementation of Curriculum of Data Augmentation for Long-tailed Recognition (CUDA) (ICLR'23 Spotlight) ☆21 · Updated last year
- [ICML 2023] Revisiting Data-Free Knowledge Distillation with Poisoned Teachers ☆23 · Updated 7 months ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆45 · Updated 2 months ago
- Code for ICML 2024 paper (Oral): Test-Time Model Adaptation with Only Forward Passes ☆64 · Updated 6 months ago
- ☆26 · Updated 10 months ago
- The official repository of the ECCV 2024 paper "Outlier-Aware Test-time Adaptation with Stable Memory Replay" ☆18 · Updated 6 months ago
- Probabilistic lifElong Test-time Adaptation with seLf-training prior (PETAL) ☆13 · Updated last year
- Efficient Dataset Distillation by Representative Matching ☆112 · Updated last year
- Code for the CVPR 2022 paper "Not Just Selection, but Exploration: Online Class-Incremental Continual Learning via Dual View Consistency" ☆24 · Updated 2 years ago
- ☆24 · Updated last year
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ☆61 · Updated last week
- [ICCV 2023] DataDAM: Efficient Dataset Distillation with Attention Matching ☆33 · Updated 8 months ago
- [CVPR 2022 Oral] Subspace Adversarial Training ☆26 · Updated last year
- ☆84 · Updated 2 years ago
- Code for the paper Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation (CVPR 2023) ☆34 · Updated last year
- A dataset condensation method, accepted at CVPR 2022 ☆69 · Updated last year
- ☆17 · Updated 9 months ago
- [ICCV 2023] A Unified Continual Learning Framework with General Parameter-Efficient Tuning ☆76 · Updated 4 months ago
- ☆26 · Updated 5 months ago
- This is the official code for "Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better" ☆39 · Updated 3 years ago
- ECCV 2024: Adversarial Prompt Tuning for Vision-Language Models ☆23 · Updated 3 months ago
- Towards Defending against Adversarial Examples via Attack-Invariant Features ☆10 · Updated last year
- ☆52 · Updated 2 months ago