tml-epfl / sam-low-rank-features
Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023]
☆28 · Updated last year
Alternatives and similar repositories for sam-low-rank-features:
Users interested in sam-low-rank-features are comparing it to the repositories listed below.
- ☆34 · Updated last year
- ☆57 · Updated 2 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- ☆11 · Updated 2 years ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 2 years ago
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award) ☆56 · Updated last year
- ☆35 · Updated 2 years ago
- Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022) ☆28 · Updated 2 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- Git Re-Basin: Merging Models modulo Permutation Symmetries in PyTorch ☆75 · Updated 2 years ago
- Distilling Model Failures as Directions in Latent Space ☆46 · Updated 2 years ago
- Official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022 ☆21 · Updated 2 years ago
- ☆107 · Updated last year
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆52 · Updated 2 years ago
- Code release for REPAIR: REnormalizing Permuted Activations for Interpolation Repair ☆47 · Updated last year
- Training vision models with full-batch gradient descent and regularization ☆37 · Updated 2 years ago
- Official code for Dataset Distillation using Neural Feature Regression (NeurIPS 2022) ☆47 · Updated 2 years ago
- ☆39 · Updated 2 years ago
- SparCL: Sparse Continual Learning on the Edge (NeurIPS 2022) ☆29 · Updated last year
- Official implementation of PlugIn Inversion ☆16 · Updated 3 years ago
- ☆28 · Updated 9 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" ☆99 · Updated last year
- Source code of "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- ☆34 · Updated 3 weeks ago
- What do we learn from inverting CLIP models? ☆54 · Updated last year
- The official PyTorch implementation of "Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t…" ☆78 · Updated 2 years ago
- [ICLR 2023] "Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!" Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen… ☆28 · Updated last year
- Predicting Out-of-Distribution Error with the Projection Norm ☆17 · Updated 2 years ago
- ☆22 · Updated 2 years ago
- Repo for the paper "Agree to Disagree: Diversity through Disagreement for Better Transferability" ☆36 · Updated 2 years ago