zju-vipa / Fast-Datafree
[AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation
☆68 · Updated 2 years ago
Alternatives and similar repositories for Fast-Datafree:
Users interested in Fast-Datafree are comparing it to the libraries listed below.
- [IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation ☆71 · Updated 2 years ago
- ☆85 · Updated 2 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆112 · Updated last year
- Data-Free Knowledge Distillation ☆20 · Updated 2 years ago
- Data-Free Network Quantization with Adversarial Knowledge Distillation (PyTorch) ☆29 · Updated 3 years ago
- [NeurIPS-2021] Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data ☆44 · Updated 2 years ago
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆102 · Updated 10 months ago
- Code and pretrained models for paper: Data-Free Adversarial Distillation ☆95 · Updated 2 years ago
- This repository is the official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022. ☆20 · Updated 2 years ago
- ☆14 · Updated last year
- Code and checkpoints of compressed networks for the paper titled "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… ☆90 · Updated 2 years ago
- Efficient Dataset Distillation by Representative Matching ☆113 · Updated last year
- The code of the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated last year
- Source code for ECCV 2022 Poster: Data-free Backdoor Removal based on Channel Lipschitzness ☆30 · Updated 2 years ago
- PyTorch implementation of paper "Dataset Distillation via Factorization" in NeurIPS 2022. ☆64 · Updated 2 years ago
- Code for "Data-Free Knowledge Distillation via Feature Exchange and Activation Region Constraint" ☆17 · Updated last year
- ☆30 · Updated 3 years ago
- [TPAMI 2023] Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces ☆40 · Updated 2 years ago
- Official Code for Dataset Distillation using Neural Feature Regression (NeurIPS 2022) ☆47 · Updated 2 years ago
- Code for the paper "On the Adversarial Robustness of Visual Transformers" ☆56 · Updated 3 years ago
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ☆63 · Updated 3 weeks ago
- (NeurIPS 2023 spotlight) Large-scale Dataset Distillation/Condensation, 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆125 · Updated 4 months ago
- A dataset condensation method, accepted at CVPR 2022. ☆69 · Updated last year
- ☆21 · Updated 4 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated last year
- Knowledge distillation (KD) from a decision-based black-box (DB3) teacher without training data. ☆21 · Updated 2 years ago
- [NeurIPS 2022] Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach -- Official Implementation ☆44 · Updated last year
- ☆27 · Updated 11 months ago
- Reimplementation of "Visualizing the Loss Landscape of Neural Nets" in PyTorch 1.8 ☆26 · Updated 2 years ago
- Implementation of Effective Sparsification of Neural Networks with Global Sparsity Constraint ☆30 · Updated 2 years ago
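
For context, many of the data-free distillation entries above (Fast-Datafree, Data-Free Adversarial Distillation, and related work) build on the same adversarial training loop: a generator synthesizes images from noise to maximize teacher/student disagreement, and the student then matches the frozen teacher on those images. Below is a minimal PyTorch sketch of that loop, not the code of any listed repository; `generator`, `teacher`, and `student` are placeholder `nn.Module`s, and the L1 logit-matching loss and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the adversarial data-free KD step (placeholder modules,
# not the official implementation of any repository listed above).
import torch
import torch.nn.functional as F

def datafree_kd_step(generator, teacher, student, g_opt, s_opt,
                     batch_size=128, nz=256, device="cuda"):
    teacher.eval()  # the teacher is frozen; only its outputs are used

    # 1) Generator step: synthesize images on which the student disagrees
    #    most with the teacher (adversarial objective, so negate the loss).
    z = torch.randn(batch_size, nz, device=device)
    fake = generator(z)
    with torch.no_grad():
        t_logits = teacher(fake)
    g_loss = -F.l1_loss(student(fake), t_logits)
    g_opt.zero_grad()
    g_loss.backward()  # also leaves stale grads on the student,
    g_opt.step()       # cleared by s_opt.zero_grad() below

    # 2) Student step: match the teacher on freshly generated images.
    z = torch.randn(batch_size, nz, device=device)
    fake = generator(z).detach()  # no gradient into the generator here
    with torch.no_grad():
        t_logits = teacher(fake)
    s_loss = F.l1_loss(student(fake), t_logits)
    s_opt.zero_grad()
    s_loss.backward()
    s_opt.step()
    return g_loss.item(), s_loss.item()
```

In use, `g_opt` and `s_opt` would be ordinary optimizers over `generator.parameters()` and `student.parameters()` (e.g. `torch.optim.Adam`), and the step would be called in a loop with no real training data involved. The individual repositories differ mainly in how the synthesis objective is regularized and batched, which is where the claimed speedups come from.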