tanganke/opcm
☆10 · Updated 2 months ago
Alternatives and similar repositories for opcm:
Users interested in opcm are comparing it to the libraries listed below.
- Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆21 · Updated 7 months ago
- [ICML 2023] "Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights?" by Ruisi Cai, Zhenyu Zhang, Zhangyang Wang ☆16 · Updated last year
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆43 · Updated 5 months ago
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆44 · Updated 6 months ago
- ☆13 · Updated 8 months ago
- ☆12 · Updated 2 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆77 · Updated 5 months ago
- ☆30 · Updated 9 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆56 · Updated last month
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆23 · Updated 3 months ago
- ☆16 · Updated 10 months ago
- Code for the paper "Mehta, S. V., Patil, D., Chandar, S., & Strubell, E. (2023). An Empirical Investigation of the Role of Pre-training i… ☆17 · Updated last year
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆14 · Updated 2 weeks ago
- ☆27 · Updated last year
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award) ☆56 · Updated last year
- The code of the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated 2 years ago
- Codes for Merging Large Language Models ☆29 · Updated 8 months ago
- Implementation of "DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation" (accepted at NAACL 2024 Findings) ☆17 · Updated 2 months ago
- LCA-on-the-line (ICML 2024 Oral) ☆11 · Updated 2 months ago
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆31 · Updated last month
- [ICLR 2025] "Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond" ☆11 · Updated last month
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 10 months ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆30 · Updated 5 months ago
- This repository is the official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022. ☆21 · Updated 2 years ago
- Code for paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆24 · Updated 10 months ago
- ☆35 · Updated last year
- [ICLR 2025] Code & data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated 10 months ago
- Less is More: Task-aware Layer-wise Distillation for Language Model Compression (ICML 2023) ☆34 · Updated last year
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023. ☆32 · Updated last year
- Mosaic IT: Enhancing Instruction Tuning with Data Mosaics ☆17 · Updated 2 months ago