ilia10000 / dataset-distillation
Soft-Label Dataset Distillation and Text Dataset Distillation
☆74 · Updated 2 years ago
Alternatives and similar repositories for dataset-distillation
Users interested in dataset-distillation are comparing it to the repositories listed below.
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆82 · Updated 3 years ago
- ☆109 · Updated 2 years ago
- Official PyTorch implementation of “Flexible Dataset Distillation: Learn Labels Instead of Images” ☆42 · Updated 4 years ago
- Accompanying code for the paper "Zero-shot Knowledge Transfer via Adversarial Belief Matching" ☆143 · Updated 5 years ago
- Code for Active Learning at the ImageNet Scale. This repository implements many popular active learning algorithms and allows training wi… ☆54 · Updated 3 years ago
- Zero-Shot Knowledge Distillation in Deep Networks ☆67 · Updated 3 years ago
- The official PyTorch implementation of "Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t…" ☆83 · Updated 3 years ago
- MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts (ICLR 2022) ☆109 · Updated 3 years ago
- ☆96 · Updated 4 years ago
- [NeurIPS 2020] “Robust Pre-Training by Adversarial Contrastive Learning”, Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang ☆114 · Updated 3 years ago
- ☆177 · Updated last year
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- [CVPR 2021] "The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models", Tianlong Chen, Jon… ☆68 · Updated 2 years ago
- Parameter-Efficient Transfer Learning with Diff Pruning ☆74 · Updated 4 years ago
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆72 · Updated last year
- Official PyTorch implementation of the Fishr regularization for out-of-distribution generalization ☆88 · Updated 3 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- ☆34 · Updated 3 months ago
- [ICML 2021] “Self-Damaging Contrastive Learning”, Ziyu Jiang, Tianlong Chen, Bobak Mortazavi, Zhangyang Wang ☆63 · Updated 3 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆113 · Updated last year
- A Closer Look at Accuracy vs. Robustness ☆88 · Updated 4 years ago
- Official code for the paper "Task2Vec: Task Embedding for Meta-Learning" (https://arxiv.org/abs/1902.03545, ICCV 2019) ☆123 · Updated 2 years ago
- Official implementation of the paper "Gradient Matching for Domain Generalization" ☆122 · Updated 3 years ago
- ☆87 · Updated 2 years ago
- IJCAI 2021: "Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation" ☆42 · Updated 2 years ago
- Code for the paper "Understanding Generalization through Visualizations" ☆61 · Updated 4 years ago
- Official code for ICML 2022: "Mitigating Neural Network Overconfidence with Logit Normalization" ☆152 · Updated 3 years ago
- Max Mahalanobis Training (ICML 2018 + ICLR 2020) ☆90 · Updated 4 years ago
- Code implementing the experiments described in the paper "On The Power of Curriculum Learning in Training Deep Networks" by Hacohen & Wei… ☆114 · Updated 5 years ago