r-three / mats
☆31 · Updated last year
Alternatives and similar repositories for mats
Users interested in mats are comparing it to the libraries listed below.
- ☆78 · Updated 3 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] · ☆43 · Updated 2 years ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" · ☆106 · Updated 2 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021) · ☆59 · Updated 3 years ago
- ☆51 · Updated last year
- Data for "Datamodels: Predicting Predictions with Training Data" · ☆97 · Updated 2 years ago
- Official code repo for the paper "Merging Models on the Fly Without Retraining: A Sequential Approach to Scalable Continual Model Merging" · ☆20 · Updated last month
- Code for the paper "The Journey, Not the Destination: How Data Guides Diffusion Models" · ☆25 · Updated last year
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] · ☆36 · Updated 3 years ago
- `dattri` is a PyTorch library for developing, benchmarking, and deploying efficient data attribution algorithms · ☆93 · Updated last week
- Bayesian low-rank adaptation for large language models · ☆27 · Updated last year
- ☆13 · Updated 2 years ago
- Repository for "Model Merging by Uncertainty-Based Gradient Matching" (ICLR 2024) · ☆29 · Updated last year
- ☆34 · Updated last year
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] · ☆19 · Updated last year
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) · ☆78 · Updated 2 years ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] · ☆51 · Updated last year
- ☆51 · Updated 2 years ago
- Code for the paper "Distinguishing the Knowable from the Unknowable with Language Models" · ☆10 · Updated last year
- Intriguing Properties of Data Attribution on Diffusion Models (ICLR 2024) · ☆35 · Updated last year
- ☆39 · Updated 3 years ago
- [ICML 2023] "Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights?" by Ruisi Cai, Zhenyu Zhang, Zhangyang Wang · ☆16 · Updated 2 years ago
- [NeurIPS 2021] A Geometric Analysis of Neural Collapse with Unconstrained Features · ☆59 · Updated 3 years ago
- ☆23 · Updated last year
- ☆63 · Updated 3 years ago
- ☆37 · Updated 10 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) · ☆76 · Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) · ☆80 · Updated 2 years ago
- Source code of "Calibrating Large Language Models Using Their Generations Only" (ACL 2024) · ☆22 · Updated 11 months ago
- Host for the CIFAR-10.2 Data Set · ☆13 · Updated 4 years ago