AngusDujw / SAF
☆35 · Updated 2 years ago
Alternatives and similar repositories for SAF:
Users interested in SAF are comparing it to the libraries listed below.
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆28 · Updated last year
- ☆16 · Updated 2 years ago
- ☆34 · Updated last year
- ☆11 · Updated 2 years ago
- ☆57 · Updated 2 years ago
- PyTorch repository for ICLR 2022 paper (GSAM), which improves generalization (e.g. +3.8% top-1 accuracy on ImageNet with ViT-B/32) ☆143 · Updated 2 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- ☆18 · Updated last year
- Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022) ☆28 · Updated 2 years ago
- Bayesian Low-Rank Adaptation for Large Language Models ☆30 · Updated 9 months ago
- Source code of "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- [NeurIPS 2022] Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach (official implementation) ☆44 · Updated last year
- [CVPR 2024] Friendly Sharpness-Aware Minimization ☆32 · Updated 5 months ago
- Deep Learning & Information Bottleneck ☆59 · Updated last year
- Repo for the paper "Agree to Disagree: Diversity through Disagreement for Better Transferability" ☆36 · Updated 2 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 2 years ago
- ☆107 · Updated last year
- [TPAMI 2023] Low Dimensional Landscape Hypothesis is True: DNNs Can Be Trained in Tiny Subspaces ☆40 · Updated 2 years ago
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML 2024 Oral) ☆14 · Updated 8 months ago
- Code for testing DCT plus Sparse (DCTpS) networks ☆14 · Updated 3 years ago
- SparCL: Sparse Continual Learning on the Edge [NeurIPS 2022] ☆29 · Updated last year
- Official implementation of the ICML 2023 paper "Can Forward Gradient Match Backpropagation?" ☆12 · Updated last year
- A generic code base for neural network pruning, especially for pruning at initialization ☆30 · Updated 2 years ago
- [ICLR 2022] "Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity" by Shiwei Liu,… ☆27 · Updated 2 years ago
- Implementation of Effective Sparsification of Neural Networks with Global Sparsity Constraint ☆31 · Updated 3 years ago
- Transformers trained on Tiny ImageNet ☆54 · Updated 2 years ago
- ☆63 · Updated last year
- ☆66 · Updated 4 months ago
- ☆40 · Updated 2 years ago
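Most of the repositories above are variants of sharpness-aware minimization (SAM): perturb the weights toward the locally worst-case direction, then descend using the gradient taken at that perturbed point. As a rough orientation (not the code of any listed repo), here is a minimal NumPy sketch of that two-step update on a toy quadratic loss; the function names, the loss, and the hyperparameters `lr` and `rho` are illustrative assumptions.

```python
import numpy as np

def loss(w):
    # Toy quadratic loss: L(w) = 0.5 * ||w||^2 (illustrative stand-in
    # for a training loss; minimum at w = 0)
    return 0.5 * np.dot(w, w)

def grad(w):
    # Gradient of the toy loss above
    return w

def sam_step(w, lr=0.1, rho=0.05):
    """One SAM-style update (sketch): step to the approximate worst-case
    point within an L2 ball of radius rho, then apply the gradient
    computed there to the original weights."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction, radius rho
    g_sharp = grad(w + eps)                      # gradient at the perturbed point
    return w - lr * g_sharp                      # descend from the ORIGINAL w

w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w)
print(loss(w) < 1e-4)  # True: iterates contract toward the minimum at w = 0
```

Variants in the list above mostly change how the perturbation `eps` is computed or amortized (e.g. sparsified perturbations, weight averaging, or surrogate objectives like SAF that avoid the second gradient pass).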