ilia10000 / dataset-distillation
Soft-Label Dataset Distillation and Text Dataset Distillation
☆73 · Updated 2 years ago
Alternatives and similar repositories for dataset-distillation:
Users interested in dataset-distillation are comparing it to the repositories listed below.
- Official PyTorch implementation of “Flexible Dataset Distillation: Learn Labels Instead of Images” ☆41 · Updated 4 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆81 · Updated 3 years ago
- Code for Active Learning at The ImageNet Scale. This repository implements many popular active learning algorithms and allows training wi… ☆52 · Updated 3 years ago
- ☆105 · Updated last year
- ☆85 · Updated 2 years ago
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild ☆55 · Updated 2 years ago
- Parameter Efficient Transfer Learning with Diff Pruning ☆73 · Updated 4 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆112 · Updated last year
- Code for Active Mixup in 2020 CVPR ☆22 · Updated 3 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021) ☆58 · Updated 3 years ago
- MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts (ICLR 2022) ☆109 · Updated 2 years ago
- ☆62 · Updated 3 years ago
- ZSKD with PyTorch ☆30 · Updated last year
- Official PyTorch implementation of "Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity" (ICLR'21 Oral) ☆103 · Updated 3 years ago
- ☆22 · Updated last year
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- ☆93 · Updated 4 years ago
- This repository is the official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022 ☆20 · Updated 2 years ago
- Implementation of Beyond Neural Scaling, beating power laws for deep models and prototype-based models ☆33 · Updated 3 months ago
- Code implementing the experiments described in the paper "On The Power of Curriculum Learning in Training Deep Networks" by Hacohen & Wei… ☆108 · Updated 5 years ago
- Smooth Adversarial Training ☆67 · Updated 4 years ago
- Official codebase of the "Rehearsal revealed: The limits and merits of revisiting samples in continual learning" paper ☆27 · Updated 3 years ago
- [NeurIPS 2020] “Robust Pre-Training by Adversarial Contrastive Learning”, Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang ☆115 · Updated 3 years ago
- Max Mahalanobis Training (ICML 2018 + ICLR 2020) ☆90 · Updated 4 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- Code for the paper "Representational Continuity for Unsupervised Continual Learning" (ICLR 22) ☆96 · Updated 2 years ago
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation ☆68 · Updated 2 years ago
- ☆57 · Updated 2 years ago
- Compressing Representations for Self-Supervised Learning ☆78 · Updated 4 years ago