tanganke / peta
Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization"
☆24 · Updated last year
Alternatives and similar repositories for peta
Users interested in peta are comparing it to the libraries listed below.
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆51 · Updated last month
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆47 · Updated last year
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆100 · Updated last year
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆76 · Updated 10 months ago
- Official code repo for the paper "Merging Models on the Fly Without Retraining: A Sequential Approach to Scalable Continual Model Merging" ☆22 · Updated 3 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" ☆108 · Updated 2 years ago
- Code for paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆30 · Updated last year
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆32 · Updated 4 months ago
- A curated list of Model Merging methods ☆96 · Updated last month
- Implementation of "DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation" (accepted to NAACL 2024 Findings) ☆27 · Updated 11 months ago
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆36 · Updated last year
- GitHub repo for the NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆25 · Updated last month
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆84 · Updated last year
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration ☆46 · Updated last year
- Awesome-Low-Rank-Adaptation ☆127 · Updated last year
- A Task of Fictitious Unlearning for VLMs ☆28 · Updated 9 months ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆36 · Updated last year
- [ICLR 2025] "Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond" ☆17 · Updated 11 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆152 · Updated 6 months ago
- [COLM 2025] SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆52 · Updated 9 months ago
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024) ☆53 · Updated last year
- Less is More: High-value Data Selection for Visual Instruction Tuning ☆17 · Updated last year
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆105 · Updated last year
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆38 · Updated 6 months ago
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ☆80 · Updated 11 months ago