ilia10000 / dataset-distillation
Soft-Label Dataset Distillation and Text Dataset Distillation
☆73 · Updated last year
Related projects
Alternatives and complementary repositories for dataset-distillation
- Official PyTorch implementation of “Flexible Dataset Distillation: Learn Labels Instead of Images” ☆41 · Updated 4 years ago
- ☆102 · Updated last year
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆80 · Updated 2 years ago
- Parameter Efficient Transfer Learning with Diff Pruning ☆72 · Updated 3 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆104 · Updated last year
- ☆81 · Updated last year
- Official PyTorch implementation of "Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity" (ICLR'21 Oral) ☆104 · Updated 2 years ago
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation ☆67 · Updated 2 years ago
- [NeurIPS 2020] "Robust Pre-Training by Adversarial Contrastive Learning", Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang ☆113 · Updated 2 years ago
- Code for Active Learning at the ImageNet Scale. This repository implements many popular active learning algorithms and allows training wi… ☆52 · Updated 2 years ago
- Code for the NeurIPS 2020 paper "Adversarial Weight Perturbation Helps Robust Generalization" ☆170 · Updated 3 years ago
- ☆37 · Updated last year
- Code and pretrained models for the paper "Data-Free Adversarial Distillation" ☆95 · Updated last year
- [NeurIPS'20] GradAug: A New Regularization Method for Deep Neural Networks ☆93 · Updated 3 years ago
- Federated posterior averaging implemented in JAX ☆49 · Updated last year
- Code and checkpoints of compressed networks for the paper "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… ☆90 · Updated last year
- ☆58 · Updated last year
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021) ☆56 · Updated 2 years ago
- ☆26 · Updated 3 years ago
- Data-Free Network Quantization with Adversarial Knowledge Distillation (PyTorch) ☆29 · Updated 3 years ago
- ☆34 · Updated 3 months ago
- ☆174 · Updated 3 months ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆36 · Updated last year
- [ICLR 2021] Heteroskedastic and Imbalanced Deep Learning with Adaptive Regularization ☆40 · Updated 3 years ago
- Implementation of "Dataset Distillation with Attention Labels for Fine-tuning BERT" (accepted at ACL 2023, main, short) ☆21 · Updated 10 months ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated last year
- ☆81 · Updated 3 months ago
- PyTorch implementation of PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022) ☆40 · Updated 2 years ago
- Zero-Shot Knowledge Distillation (ZSKD) in PyTorch ☆30 · Updated last year
- Accompanying code for the paper "Zero-shot Knowledge Transfer via Adversarial Belief Matching" ☆139 · Updated 4 years ago