yolky / RFAD
Code for the paper "Efficient Dataset Distillation using Random Feature Approximation"
☆37 · Updated last year
Alternatives and similar repositories for RFAD:
Users interested in RFAD are comparing it to the libraries listed below.
- This repository is the official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022. ☆20 · Updated 2 years ago
- ☆37 · Updated 2 years ago
- Official Code for Dataset Distillation using Neural Feature Regression (NeurIPS 2022) ☆47 · Updated 2 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆110 · Updated last year
- ☆84 · Updated 2 years ago
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award) ☆55 · Updated last year
- ☆57 · Updated 2 years ago
- Repo for the paper: "Agree to Disagree: Diversity through Disagreement for Better Transferability" ☆35 · Updated 2 years ago
- Official Implementation for PlugIn Inversion ☆16 · Updated 3 years ago
- ICLR 2022 (Spotlight): Continual Learning With Filter Atom Swapping ☆15 · Updated last year
- [NeurIPS 2022] Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach -- Official Implementation ☆44 · Updated last year
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆53 · Updated 2 years ago
- ☆34 · Updated last year
- The code of the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated last year
- Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022) ☆28 · Updated 2 years ago
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆26 · Updated last year
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated last year
- ☆22 · Updated last year
- This repository is the official implementation of Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regulari… ☆21 · Updated 2 years ago
- Code to reproduce experiments from "Does Knowledge Distillation Really Work?", a paper which appeared in the NeurIPS 2021 proceedings. ☆33 · Updated last year
- ☆34 · Updated 2 years ago
- Official PyTorch implementation of "Flexible Dataset Distillation: Learn Labels Instead of Images" ☆41 · Updated 4 years ago
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild ☆55 · Updated 2 years ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 2 years ago
- ☆11 · Updated 2 years ago
- Code for "Surgical Fine-Tuning Improves Adaptation to Distribution Shifts" published at ICLR 2023 ☆29 · Updated last year
- ☆21 · Updated 2 years ago
- Source code of "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- ☆42 · Updated 2 years ago
- [ICLR 2022] "Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity" by Shiwei Liu,… ☆27 · Updated 2 years ago