justincui03 / dc_benchmark
☆88 · Updated 2 years ago
Alternatives and similar repositories for dc_benchmark
Users interested in dc_benchmark are comparing it to the libraries listed below.
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆114 · Updated 2 years ago
- This repository is the official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022. ☆22 · Updated 3 years ago
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation ☆74 · Updated 3 years ago
- Efficient Dataset Distillation by Representative Matching ☆113 · Updated last year
- The code of the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated 2 years ago
- Official Code for Dataset Distillation using Neural Feature Regression (NeurIPS 2022) ☆48 · Updated 3 years ago
- [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆81 · Updated last year
- [IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation ☆72 · Updated 3 years ago
- Official implementation of "Private Set Generation with Discriminative Information" (NeurIPS 2022) ☆17 · Updated 2 years ago
- ☆40 · Updated 3 years ago
- A dataset condensation method, accepted at CVPR 2022. ☆71 · Updated last year
- (PyTorch) Training ResNets on ImageNet-100 data ☆63 · Updated 3 years ago
- PyTorch implementation of the paper "Dataset Distillation via Factorization" (NeurIPS 2022) ☆67 · Updated 2 years ago
- This is the code of the ICLR 2022 Oral paper 'Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Au… ☆30 · Updated 2 years ago
- ☆69 · Updated 2 years ago
- ☆29 · Updated last year
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- [ICLR 2023] Trainable Weight Averaging: Efficient Training by Optimizing Historical Solutions ☆27 · Updated 9 months ago
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆102 · Updated last year
- [NeurIPS 2021] "When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?" ☆48 · Updated 4 years ago
- [ICML 2023] Revisiting Data-Free Knowledge Distillation with Poisoned Teachers ☆23 · Updated last year
- [CVPR23] "Understanding and Improving Visual Prompting: A Label-Mapping Perspective" by Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zha… ☆53 · Updated 2 years ago
- [TPAMI 2023] Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces ☆42 · Updated 3 years ago
- Official implementation of "When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture" published at Neur… ☆35 · Updated last year
- ☆26 · Updated 2 years ago
- [ICLR 2021] "Robust Overfitting may be mitigated by properly learned smoothening" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chan… ☆47 · Updated 3 years ago
- Official implementation of the paper "Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay" (AAAI-2… ☆18 · Updated 3 years ago
- ☆23 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆53 · Updated 2 years ago