ilia10000 / dataset-distillation
Soft-Label Dataset Distillation and Text Dataset Distillation
☆74 · Updated 3 years ago
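For context on what the repository's title refers to: in soft-label dataset distillation, a handful of synthetic examples and their soft labels are learned jointly, so that a model trained briefly on the tiny distilled set performs well on the real data. The sketch below is a minimal, illustrative outline of that bi-level setup, not the repository's actual code; the model, the MNIST-like input shapes, and the single inner step are all assumptions for brevity, and it relies on `torch.func.functional_call` (PyTorch 2.x).

```python
# Minimal sketch of soft-label dataset distillation (NOT the repository's implementation).
# Idea: learn a tiny synthetic dataset AND its soft labels so that a model trained
# briefly on the distilled data does well on real data (bi-level optimization).
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model():
    # Hypothetical small classifier for 28x28 grayscale inputs (MNIST-like assumption).
    return nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

# Learnable distilled data: 10 synthetic images, 10 soft-label vectors, and a step size.
distilled_x = torch.randn(10, 1, 28, 28, requires_grad=True)
distilled_y = torch.randn(10, 10, requires_grad=True)        # soft labels (as logits)
distill_lr = torch.tensor(0.02, requires_grad=True)          # learnable inner-loop LR
outer_opt = torch.optim.Adam([distilled_x, distilled_y, distill_lr], lr=1e-3)

def distillation_step(real_x, real_y):
    """One outer-loop update of the distilled data, using a batch of real examples."""
    model = make_model()                      # fresh random initialization each step
    params = list(model.parameters())

    # Inner loop: one gradient step on the distilled data with its soft labels.
    logits = model(distilled_x)
    inner_loss = F.kl_div(F.log_softmax(logits, dim=1),
                          F.softmax(distilled_y, dim=1), reduction="batchmean")
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    updated = [p - distill_lr * g for p, g in zip(params, grads)]

    # Outer loop: evaluate the updated model on real data via the functional API,
    # so gradients flow back into the distilled images, soft labels, and step size.
    names = [n for n, _ in model.named_parameters()]
    outer_logits = torch.func.functional_call(model, dict(zip(names, updated)), (real_x,))
    outer_loss = F.cross_entropy(outer_logits, real_y)

    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
    return outer_loss.item()
```

Learning the labels (rather than fixing them to one-hot targets, as in the original dataset distillation formulation) is what lets each synthetic example carry information about several classes at once.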
Alternatives and similar repositories for dataset-distillation
Users that are interested in dataset-distillation are comparing it to the libraries listed below
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆82 · Updated 3 years ago
- Official PyTorch implementation of “Flexible Dataset Distillation: Learn Labels Instead of Images” ☆41 · Updated 5 years ago
- ☆109 · Updated 2 years ago
- Code for Active Learning at The ImageNet Scale. This repository implements many popular active learning algorithms and allows training wi… ☆54 · Updated 4 years ago
- Accompanying code for the paper "Zero-shot Knowledge Transfer via Adversarial Belief Matching" ☆144 · Updated 5 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- Knowledge Extraction with No Observable Data (NeurIPS 2019) ☆46 · Updated 5 years ago
- [NeurIPS 2020] “Robust Pre-Training by Adversarial Contrastive Learning”, Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang ☆115 · Updated 3 years ago
- Code for "Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources". (IC… ☆38 · Updated 5 years ago
- Parameter Efficient Transfer Learning with Diff Pruning ☆74 · Updated 4 years ago
- Zero-Shot Knowledge Distillation in Deep Networks ☆67 · Updated 3 years ago
- ☆96 · Updated 4 years ago
- ☆34 · Updated 6 months ago
- MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts (ICLR 2022) ☆109 · Updated 3 years ago
- ☆89 · Updated 2 years ago
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆73 · Updated last year
- ☆178 · Updated last year
- This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" ☆50 · Updated 4 years ago
- "Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness" (NeurIPS 2020) ☆51 · Updated 5 years ago
- Max Mahalanobis Training (ICML 2018 + ICLR 2020) ☆90 · Updated 4 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- Official PyTorch implementation of the Fishr regularization for out-of-distribution generalization ☆88 · Updated 3 years ago
- ☆58 · Updated 2 years ago
- Code and pretrained models for the paper: Data-Free Adversarial Distillation ☆99 · Updated 3 years ago
- [WACV21] Code for our paper: Samuel, Atzmon and Chechik, "From Generalized zero-shot learning to long-tail with class descriptors" ☆28 · Updated 4 years ago
- Learning from Failure: Training Debiased Classifier from Biased Classifier (NeurIPS 2020) ☆93 · Updated 5 years ago
- [NeurIPS'21] "AugMax: Adversarial Composition of Random Augmentations for Robust Training" by Haotao Wang, Chaowei Xiao, Jean Kossaifi, Z… ☆125 · Updated 3 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation ☆75 · Updated 3 years ago
- The official PyTorch implementation - Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t… ☆83 · Updated 3 years ago