DingfanChen / Private-Set
Official implementation of "Private Set Generation with Discriminative Information" (NeurIPS 2022)
☆17 · Updated 2 years ago
Alternatives and similar repositories for Private-Set
Users interested in Private-Set are comparing it to the repositories listed below.
- ☆87 · Updated 2 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆113 · Updated 2 years ago
- ☆39 · Updated 2 years ago
- This repository is the official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022. ☆22 · Updated 3 years ago
- Official code for "Dataset Distillation using Neural Feature Regression" (NeurIPS 2022) ☆48 · Updated 2 years ago
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation ☆74 · Updated 3 years ago
- A dataset condensation method, accepted at CVPR 2022. ☆71 · Updated last year
- PyTorch implementation of the paper "Dataset Distillation via Factorization" (NeurIPS 2022) ☆66 · Updated 2 years ago
- [IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation ☆72 · Updated 3 years ago
- ☆15 · Updated last year
- Code for the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated 2 years ago
- Official repo for "PAC-Bayes Information Bottleneck" (ICLR 2022) ☆49 · Updated 3 years ago
- "Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness" (NeurIPS 2020) ☆51 · Updated 4 years ago
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award) ☆57 · Updated 2 years ago
- ICLR 2022 (Spotlight): Continual Learning With Filter Atom Swapping ☆16 · Updated 2 years ago
- [NeurIPS 2021] "When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?" ☆48 · Updated 3 years ago
- [ICML 2023] Revisiting Data-Free Knowledge Distillation with Poisoned Teachers ☆23 · Updated last year
- Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022) ☆28 · Updated 2 years ago
- [ICLR 2022] "Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity" by Shiwei Liu,… ☆27 · Updated 3 years ago
- [ICCV 2023] DataDAM: Efficient Dataset Distillation with Attention Matching ☆34 · Updated last year
- Official PyTorch implementation of "Flexible Dataset Distillation: Learn Labels Instead of Images" ☆42 · Updated 5 years ago
- (PyTorch) Training ResNets on ImageNet-100 data ☆64 · Updated 3 years ago
- ☆33 · Updated 4 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- ☆108 · Updated 3 years ago
- [ICLR 2021] "Robust Overfitting may be mitigated by properly learned smoothening" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chan… ☆47 · Updated 3 years ago
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild ☆56 · Updated 3 years ago
- ☆23 · Updated last year
- Code and checkpoints of compressed networks for the paper "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… ☆91 · Updated 2 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆82 · Updated 3 years ago