zju-vipa / Fast-Datafree
[AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation
☆75 · Updated 3 years ago
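For context, data-free knowledge distillation typically alternates between synthesizing inputs with a generator and distilling the teacher's outputs into the student on those synthetic inputs. Below is a minimal, generic PyTorch sketch of one distillation step; the `teacher`, `student`, `generator`, `optimizer`, noise dimension `z_dim`, and temperature `T` are all assumed placeholders, and this illustrates the common recipe rather than Fast-Datafree's specific acceleration.

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, generator, optimizer,
                 batch_size=64, z_dim=256, T=4.0):
    """One generic data-free KD step (sketch, not the Fast-Datafree method)."""
    teacher.eval()
    # Synthesize a batch of inputs from noise instead of using real data.
    z = torch.randn(batch_size, z_dim)
    x = generator(z)
    # Teacher provides soft targets on the synthetic batch.
    with torch.no_grad():
        t_logits = teacher(x)
    # Detach so the student update does not backpropagate into the generator.
    s_logits = student(x.detach())
    # Standard temperature-scaled KL distillation loss.
    loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In most data-free KD methods this student update is interleaved with a generator update that maximizes teacher/student disagreement or matches teacher batch-norm statistics; the repositories listed below vary mainly in how those synthetic inputs are produced.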
Alternatives and similar repositories for Fast-Datafree
Users interested in Fast-Datafree are comparing it to the repositories listed below.
- [IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation ☆73 · Updated 3 years ago
- ☆88 · Updated 2 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆115 · Updated 2 years ago
- Efficient Dataset Distillation by Representative Matching ☆113 · Updated last year
- Data-Free Network Quantization With Adversarial Knowledge Distillation (PyTorch) ☆30 · Updated 4 years ago
- Code and checkpoints of compressed networks for the paper "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… ☆91 · Updated 2 years ago
- Code and pretrained models for the paper "Data-Free Adversarial Distillation" ☆99 · Updated 3 years ago
- Official implementation of "When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture" published at Neur… ☆36 · Updated last year
- PyTorch implementation of the paper "Dataset Distillation via Factorization" (NeurIPS 2022) ☆67 · Updated 3 years ago
- Knowledge distillation (KD) from a decision-based black-box (DB3) teacher without training data ☆22 · Updated 3 years ago
- Data-Free Knowledge Distillation ☆22 · Updated 3 years ago
- (PyTorch) Training ResNets on ImageNet-100 data ☆63 · Updated 3 years ago
- Official implementation of "Dataset Condensation with Contrastive Signals" (DCC), accepted at ICML 2022 ☆22 · Updated 3 years ago
- [TPAMI 2023] Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces ☆42 · Updated 3 years ago
- [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆81 · Updated last year
- Official implementation of "Private Set Generation with Discriminative Information" (NeurIPS 2022) ☆17 · Updated 2 years ago
- Code for the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated 2 years ago
- [NeurIPS-2021] Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data ☆45 · Updated 3 years ago
- ☆23 · Updated 2 years ago
- A dataset condensation method, accepted at CVPR 2022 ☆71 · Updated last year
- [ICLR 2023] Trainable Weight Averaging: Efficient Training by Optimizing Historical Solutions ☆27 · Updated 9 months ago
- ☆32 · Updated 3 years ago
- Code for "Data-Free Knowledge Distillation via Feature Exchange and Activation Region Constraint" ☆21 · Updated 2 years ago
- A NumPy and PyTorch implementation of CKA similarity with CUDA support ☆94 · Updated 4 years ago
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆104 · Updated last year
- ☆28 · Updated 2 years ago
- (NeurIPS 2023 spotlight) Large-scale Dataset Distillation/Condensation; 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆132 · Updated last year
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆54 · Updated 2 years ago
- Official code for "Dataset Distillation using Neural Feature Regression" (NeurIPS 2022) ☆48 · Updated 3 years ago
- pytorch-tiny-imagenet ☆187 · Updated this week