LOG-postech / ZIP
☆17 · Updated 2 months ago
Alternatives and similar repositories for ZIP
Users interested in ZIP are comparing it to the repositories listed below.
- [UAI 2025] Official code for reproducing the paper "Critical Influence of Overparameterization on Sharpness-aware Minimization" ☆19 · Updated 8 months ago
- [NeurIPS 2021] A Geometric Analysis of Neural Collapse with Unconstrained Features ☆60 · Updated 3 years ago
- Coresets via Bilevel Optimization ☆68 · Updated 5 years ago
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆28 · Updated 2 years ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆38 · Updated 3 years ago
- Simple CIFAR10 ResNet example with JAX ☆23 · Updated 4 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated 2 years ago
- Bayesian Low-Rank Adaptation for Large Language Models ☆36 · Updated last year
- Official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023) ☆52 · Updated last year
- [NeurIPS 2021] Code for "Taxonomizing local versus global structure in neural network loss landscapes" (https://arxiv.org/abs/2107.11228) ☆20 · Updated 4 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022) ☆28 · Updated 3 years ago
- Source code for "Task arithmetic in the tangent space: Improved editing of pre-trained models" ☆108 · Updated 2 years ago
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆73 · Updated last year
- Official PyTorch implementation of Frequency Domain-based Dataset Distillation [NeurIPS 2023] ☆30 · Updated last year
- Code for the paper "Out-of-Domain Robustness via Targeted Augmentations" ☆14 · Updated 2 years ago
- Benchmark for Natural Temporal Distribution Shift (NeurIPS 2022) ☆68 · Updated 2 years ago
- [ICLR 2022] "Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity" by Shiwei Liu, … ☆27 · Updated 3 years ago