danielm1405 / iso-merging
[ICML 2025] No Task Left Behind: Isotropic Model Merging with Common and Task-Specific Subspaces (official repository)
☆25Updated 2 months ago
Alternatives and similar repositories for iso-merging
Users interested in iso-merging are comparing it to the libraries listed below.
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024]☆51Updated 11 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024.☆90Updated 11 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models".☆105Updated 2 years ago
- Code for paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts"☆29Updated last year
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging☆69Updated 7 months ago
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic☆29Updated 2 weeks ago
- Representation Surgery for Multi-Task Model Merging. ICML, 2024.☆46Updated last year
- ☆11Updated 2 months ago
- Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization"☆22Updated last year
- [NeurIPS 2023 Spotlight] Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training☆35Updated 6 months ago
- ☆75Updated 3 years ago
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters☆42Updated 2 months ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?"☆37Updated 2 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024)☆76Updated last year
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free☆42Updated 6 months ago
- Task Singular Vectors: Reducing Task Interference in Model Merging. Merges models while avoiding task interference through separable models.☆33Updated 2 months ago
- This is the repository for "Model Merging by Uncertainty-Based Gradient Matching", ICLR 2024.☆28Updated last year
- A curated list of Model Merging methods.☆92Updated last year
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models☆35Updated last year
- ☆16Updated last year
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba…☆30Updated 6 months ago
- Data Valuation without Training of a Model, submitted to ICLR'23☆22Updated 2 years ago
- [AAAI, ICLR TP] Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening☆55Updated last year
- [ICLR 2025] "Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond"☆12Updated 7 months ago
- [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao…☆81Updated last year
- Implementation of "DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation" (accepted by NAACL 2024 Findings).☆23Updated 8 months ago
- Code for "Surgical Fine-Tuning Improves Adaptation to Distribution Shifts" published at ICLR 2023☆29Updated 2 years ago
- GitHub repo for NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models"☆20Updated 3 weeks ago
- ☆28Updated 6 months ago
- Official code for the paper "Probing the Decision Boundaries of In-context Learning in Large Language Models". https://arxiv.org/abs/2406.11233…☆19Updated 2 months ago