tml-epfl / sam-low-rank-features
Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023]
☆28 · Updated last year
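For orientation, the sketch below shows the generic two-step SAM update (ascend to a worst-case perturbation within an L2 ball, then descend using the gradient taken there) that this repository and several of the listed alternatives study. It is a minimal illustration, not code from sam-low-rank-features; `model`, `loss_fn`, `base_optimizer`, and `rho` are placeholder names.

```python
import torch

def sam_step(model, loss_fn, inputs, targets, base_optimizer, rho=0.05):
    # 1) Gradient at the current weights w.
    loss_fn(model(inputs), targets).backward()

    # 2) Ascend: move each parameter by rho * grad / ||grad|| to the
    #    (approximate) worst-case point inside the L2 ball of radius rho.
    with torch.no_grad():
        grad_norm = torch.norm(torch.stack(
            [p.grad.norm(p=2) for p in model.parameters() if p.grad is not None]))
        perturbations = []
        for p in model.parameters():
            if p.grad is None:
                perturbations.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbations.append(e)

    # 3) Descend: take the gradient at the perturbed weights w + e,
    #    restore w, and apply the base optimizer update with that gradient.
    model.zero_grad()
    loss_fn(model(inputs), targets).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), perturbations):
            if e is not None:
                p.sub_(e)
    base_optimizer.step()
    model.zero_grad()
```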
Alternatives and similar repositories for sam-low-rank-features:
Users interested in sam-low-rank-features are comparing it to the repositories listed below.
- ☆57 · Updated 2 years ago
- ☆34 · Updated last year
- ☆35 · Updated 2 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 2 years ago
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award) ☆55 · Updated last year
- ☆34 · Updated 7 months ago
- ☆11 · Updated 2 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022) ☆28 · Updated 2 years ago
- SparCL: Sparse Continual Learning on the Edge @ NeurIPS 22 ☆29 · Updated last year
- ☆37 · Updated 2 years ago
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆53 · Updated 2 years ago
- Code release for REPAIR: REnormalizing Permuted Activations for Interpolation Repair ☆47 · Updated last year
- This repository is the official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022. ☆20 · Updated 2 years ago
- This repository provides code for "On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness". ☆45 · Updated 2 years ago
- Official Implementation for PlugIn Inversion ☆16 · Updated 3 years ago
- ☆85 · Updated 2 years ago
- [NeurIPS 2021] A Geometric Analysis of Neural Collapse with Unconstrained Features ☆55 · Updated 2 years ago
- Simple CIFAR10 ResNet example with JAX. ☆23 · Updated 3 years ago
- Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation ☆45 · Updated 2 years ago
- Code for the paper "The Journey, Not the Destination: How Data Guides Diffusion Models" ☆22 · Updated last year
- ICLR 2022 (Spotlight): Continual Learning With Filter Atom Swapping ☆16 · Updated last year
- Distilling Model Failures as Directions in Latent Space ☆46 · Updated 2 years ago
- Data for "Datamodels: Predicting Predictions with Training Data" ☆95 · Updated last year
- A simple and efficient baseline for data attribution ☆11 · Updated last year
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆112 · Updated last year
- [CVPR 2024] This repository includes the official implementation of our paper "Revisiting Adversarial Training at Scale" ☆19 · Updated 11 months ago
- Official Code for Dataset Distillation using Neural Feature Regression (NeurIPS 2022) ☆47 · Updated 2 years ago
- ☆44 · Updated 2 years ago