Mi-Peng / Sparse-Sharpness-Aware-Minimization
[NeurIPS 2022] Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach -- Official Implementation
☆44 · Updated last year
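The repository accompanies the sparsified-perturbation SAM paper above. As a rough illustration only, the sketch below shows sharpness-aware minimization in which the adversarial weight perturbation is restricted to a sparse subset of parameters; the random binary mask, the `SparseSAM` class name, and its `rho`/`sparsity` arguments are placeholders for this sketch, not the repository's actual API (the paper derives its masks with more principled strategies).

```python
# Illustrative sketch: SAM with a sparsified perturbation.
# The random mask is a stand-in for the paper's mask selection;
# this is NOT the official implementation.
import torch


class SparseSAM:
    """Wraps a base optimizer and applies a sparse SAM perturbation."""

    def __init__(self, params, base_optimizer, rho=0.05, sparsity=0.5):
        self.params = [p for p in params if p.requires_grad]
        self.base_optimizer = base_optimizer  # e.g. torch.optim.SGD over the same params
        self.rho = rho                        # perturbation radius
        self.sparsity = sparsity              # fraction of entries left unperturbed
        self._e_w = {}

    @torch.no_grad()
    def first_step(self):
        # Ascent step: perturb only a sparse subset of weights toward higher loss.
        grads = [p.grad for p in self.params if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
        scale = self.rho / (grad_norm + 1e-12)
        for p in self.params:
            if p.grad is None:
                continue
            mask = (torch.rand_like(p) > self.sparsity).float()  # keep ~(1 - sparsity) entries
            e_w = scale * p.grad * mask
            p.add_(e_w)
            self._e_w[p] = e_w

    @torch.no_grad()
    def second_step(self):
        # Undo the perturbation, then update with the base optimizer.
        for p in self.params:
            if p in self._e_w:
                p.sub_(self._e_w[p])
        self.base_optimizer.step()
        self.base_optimizer.zero_grad()


# Usage sketch: two forward/backward passes per batch.
#   loss_fn(model(x), y).backward(); opt.first_step()
#   loss_fn(model(x), y).backward(); opt.second_step()
```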
Alternatives and similar repositories for Sparse-Sharpness-Aware-Minimization:
Users interested in Sparse-Sharpness-Aware-Minimization are comparing it to the repositories listed below
- The code of the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023)☆40 · Updated 2 years ago
- [IJCAI 2021] Contrastive Model Inversion for Data-Free Knowledge Distillation☆72 · Updated 3 years ago
- Official code for Dataset Distillation using Neural Feature Regression (NeurIPS 2022)☆47 · Updated 2 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML 2022)☆112 · Updated last year
- ☆23 · Updated last year
- With respect to the input tensor instead of the parameters of the NN☆18 · Updated 2 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation"☆37 · Updated 2 years ago
- Official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022☆21 · Updated 2 years ago
- ☆86 · Updated 2 years ago
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award)☆56 · Updated 2 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper)☆15 · Updated 2 years ago
- [CVPR 2023] "Understanding and Improving Visual Prompting: A Label-Mapping Perspective" by Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zha…☆53 · Updated last year
- ☆26 · Updated 4 years ago
- PyTorch implementation of the paper "Dataset Distillation via Factorization" (NeurIPS 2022)