graldij / transformer-fusion
Official repository of the "Transformer Fusion with Optimal Transport" paper, published as a conference paper at ICLR 2024.
☆30 · Updated last year
Alternatives and similar repositories for transformer-fusion
Users interested in transformer-fusion are comparing it to the repositories listed below.
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" ☆33 · Updated last year
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆51 · Updated 3 weeks ago
- (ICML 2023) Discover and Cure: Concept-aware Mitigation of Spurious Correlation ☆43 · Updated 2 months ago
- Official code for the ICLR 2024 paper "Non-negative Contrastive Learning" ☆46 · Updated last year
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆32 · Updated 3 months ago
- Code for the paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆24 · Updated last year
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024) ☆100 · Updated last year
- ☆31 · Updated last month
- PyTorch implementation of the ICML 2024 paper "Navigating Complexity: Toward Lossless Graph Condensation via Expanding Window Matching" ☆26 · Updated last year
- Representation Surgery for Multi-Task Model Merging (ICML 2024) ☆47 · Updated last year
- ☆152 · Updated last year
- Official code implementation for the paper "Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Expl… ☆12 · Updated 9 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆74 · Updated 10 months ago
- ☆17 · Updated 9 months ago
- Bayesian Low-Rank Adaptation of LLMs: BLoB [NeurIPS 2024] and TFB [NeurIPS 2025] ☆31 · Updated 3 months ago
- Implementation of "DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation" (accepted to NAACL 2024 Findings) ☆26 · Updated 11 months ago
- Official code repository for the paper "Merging Models on the Fly Without Retraining: A Sequential Approach to Scalable Continual Model Merging" ☆22 · Updated 3 months ago
- Code for the EMNLP 2024 paper "How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for M… ☆13 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆84 · Updated last year
- Official code for the ICLR 2023 paper "ContraNorm: A Contrastive Learning Perspective on Oversmoothing and Beyond" ☆35 · Updated 2 years ago
- [NeurIPS 2025] Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models