tanganke / peta
Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization"
☆21 · Updated 11 months ago
Alternatives and similar repositories for peta
Users interested in peta are comparing it to the repositories listed below.
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆46 · Updated 10 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆88 · Updated 10 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆67 · Updated 5 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆49 · Updated 10 months ago
- ☆11 · Updated last month
- ☆17 · Updated 7 months ago
- Code for paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆29 · Updated last year
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆103 · Updated 2 years ago
- A curated list of Model Merging methods. ☆92 · Updated 11 months ago
- ☆15 · Updated last year
- Awesome-Low-Rank-Adaptation ☆115 · Updated 10 months ago
- GitHub repo for NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆18 · Updated last month
- Implementation of "DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation" (accepted by NAACL 2024 Findings). ☆23 · Updated 6 months ago
- ☆28 · Updated last year
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆40 · Updated 4 months ago
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆28 · Updated 7 months ago
- Task Singular Vectors: Reducing Task Interference in Model Merging. Merges models while avoiding task interference through separable models. ☆23 · Updated 3 weeks ago
- Codes for Merging Large Language Models ☆33 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆81 · Updated last year
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆35 · Updated 3 weeks ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆34 · Updated 9 months ago
- ☆20 · Updated 9 months ago
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆35 · Updated last year
- Official PyTorch implementation of our paper accepted at ICLR 2024: Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆49 · Updated last year
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba… ☆29 · Updated 5 months ago
- [NeurIPS 2023] "Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning" by Yihua Zhang*, Yimeng Zhang*, … ☆13 · Updated last year
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆41 · Updated last year
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆59 · Updated 11 months ago
- This is the repository for "Model Merging by Uncertainty-Based Gradient Matching", ICLR 2024. ☆28 · Updated last year
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 4 months ago