tml-epfl / sam-low-rank-features
Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023]
☆28 · Updated last year
Alternatives and similar repositories for sam-low-rank-features
Users interested in sam-low-rank-features are comparing it to the repositories listed below.
- ☆34 · Updated last year
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 3 years ago
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award) ☆56 · Updated 2 years ago
- ☆58 · Updated 2 years ago
- Data for "Datamodels: Predicting Predictions with Training Data" ☆97 · Updated 2 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- [NeurIPS 2021] A Geometric Analysis of Neural Collapse with Unconstrained Features ☆57 · Updated 2 years ago
- ☆107 · Updated last year
- Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022) ☆28 · Updated 2 years ago
- Code release for REPAIR: REnormalizing Permuted Activations for Interpolation Repair ☆48 · Updated last year
- ☆31 · Updated last year
- The official PyTorch implementation - Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t… ☆79 · Updated 3 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆113 · Updated last year
- The official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022 ☆22 · Updated 3 years ago
- ☆45 · Updated 2 years ago
- ☆11 · Updated 2 years ago
- DiWA: Diverse Weight Averaging for Out-of-Distribution Generalization ☆31 · Updated 2 years ago
- An official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023) ☆48 · Updated last year
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆72 · Updated last year
- Git Re-Basin: Merging Models modulo Permutation Symmetries in PyTorch ☆76 · Updated 2 years ago
- ☆23 · Updated 3 years ago
- Code for the paper "The Journey, Not the Destination: How Data Guides Diffusion Models" ☆24 · Updated last year
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆52 · Updated 2 years ago
- Official code for Dataset Distillation using Neural Feature Regression (NeurIPS 2022) ☆47 · Updated 2 years ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" ☆102 · Updated 2 years ago
- ☆36 · Updated 2 years ago
- What do we learn from inverting CLIP models? ☆55 · Updated last year
- Host CIFAR-10.2 Data Set ☆13 · Updated 3 years ago
- ☆86 · Updated 2 years ago