MarlonBecker / MSAM
☆18, updated last year

Alternatives and similar repositories for MSAM:
Users interested in MSAM are comparing it to the libraries listed below.
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] (☆27, updated last year)
- [ICLR 2023] Eva: Practical Second-order Optimization with Kronecker-vectorized Approximation (☆12, updated last year)
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) (☆30, updated 3 months ago)
- Code for testing DCT plus Sparse (DCTpS) networks (☆14, updated 3 years ago)
- Prospect Pruning: Finding Trainable Weights at Initialization Using Meta-Gradients (☆31, updated 2 years ago)
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral) (☆14, updated 6 months ago)
- A modern look at the relationship between sharpness and generalization [ICML 2023] (☆43, updated last year)
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] (☆35, updated 2 years ago)
- Code for the paper: Why Transformers Need Adam: A Hessian Perspective (☆49, updated 9 months ago)
- Recycling diverse models (☆44, updated 2 years ago)
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) (☆79, updated last year)
- Source code of "What can linearized neural networks actually say about generalization?" (☆20, updated 3 years ago)
- Latest Weight Averaging (NeurIPS HITY 2022) (☆28, updated last year)
- Bayesian Low-Rank Adaptation for Large Language Models (☆29, updated 7 months ago)
- Official code for "In Search of Robust Measures of Generalization" (NeurIPS 2020) (☆28, updated 4 years ago)
- [ICLR 2023] NTK-SAP: Improving neural network pruning by aligning training dynamics (☆18, updated last year)
- Deep Learning & Information Bottleneck (☆56, updated last year)
- [CVPR 2024] Friendly Sharpness-Aware Minimization (☆27, updated 3 months ago)
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" (☆37, updated last year)
- Distilling Model Failures as Directions in Latent Space (☆46, updated 2 years ago)
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… (☆16, updated 8 months ago)
- Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022) (☆28, updated 2 years ago)
- Pytorch Datasets for Easy-To-Hard (☆27, updated last month)
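Many of the entries above build on sharpness-aware minimization (SAM), the optimizer family that MSAM extends. For context, here is a minimal NumPy sketch of the basic two-step SAM update (ascend to a worst-case nearby point, then descend from there); the function name `sam_step` and the toy quadratic loss are illustrative only, not code from any of the listed repositories:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization (SAM) step (illustrative sketch).

    Step 1: move to the approximate worst-case point within an L2 ball
    of radius rho around w. Step 2: apply a gradient-descent update to w
    using the gradient measured at that perturbed point.
    """
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction, scaled to radius rho
    g_adv = grad_fn(w + eps)                     # gradient at the perturbed point
    return w - lr * g_adv

# Toy quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is simply w.
grad_fn = lambda w: w
w = np.array([1.0, -2.0])
for _ in range(50):
    w = sam_step(w, grad_fn)
print(np.linalg.norm(w))  # the norm shrinks toward 0 on this convex toy problem
```

Variants in the list above change where the perturbation is computed (e.g. momentum-based in MSAM) or average weights across steps, but they share this two-evaluation structure.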