tanganke / subspace_fusion
Code for paper "Concrete Subspace Learning based Interference Elimination for Multi-task Model Fusion"
☆12 · Updated 7 months ago
Related projects
Alternatives and complementary repositories for subspace_fusion
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆52 · Updated 3 weeks ago
- Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆14 · Updated 2 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆47 · Updated last month
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆36 · Updated last month
- [AAAI 2024] MELO: Enhancing Model Editing with Neuron-indexed Dynamic LoRA ☆21 · Updated 7 months ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆23 · Updated last week
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆33 · Updated last month
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" ☆55 · Updated last year
- Official code for the paper "Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications" ☆60 · Updated last month
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models. NeurIPS 2024. ☆62 · Updated last month
- Code and data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆10 · Updated 5 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆29 · Updated 4 months ago
- Code for the paper "Spectral Editing of Activations for Large Language Model Alignments" ☆16 · Updated 4 months ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆15 · Updated 5 months ago
- Less is More: Task-aware Layer-wise Distillation for Language Model Compression (ICML 2023) ☆29 · Updated last year
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆28 · Updated last month
- Official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS 2024) ☆10 · Updated 2 months ago
- Codebase for decoding compressed trust ☆20 · Updated 6 months ago
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆16 · Updated 2 months ago
- Official implementation of ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting ☆14 · Updated 3 months ago
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆19 · Updated 7 months ago
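
Several of the repositories above (subspace_fusion itself, AdaMerging, EMR-Merging, Localize-and-Stitch) build on task-arithmetic-style model merging: each task vector is the difference between a fine-tuned checkpoint and the shared pretrained weights, and the merged model adds a scaled sum of those vectors back to the pretrained weights. The snippet below is a minimal, generic sketch of that baseline, not the method of any specific repository listed here; the function name `merge_by_task_arithmetic` and the fixed `scaling` coefficient are illustrative assumptions (the papers above replace the fixed coefficient with learned, localized, or interference-aware variants).

```python
# Minimal task-arithmetic merging sketch (hypothetical helper, not taken from any repo above).
# Assumes all state dicts share the same keys and tensor shapes.
from typing import Dict, List
import torch


def merge_by_task_arithmetic(
    pretrained: Dict[str, torch.Tensor],
    finetuned: List[Dict[str, torch.Tensor]],
    scaling: float = 0.3,
) -> Dict[str, torch.Tensor]:
    """Merge several fine-tuned checkpoints into one multi-task model.

    Each task vector is (finetuned - pretrained); the merged model is the
    pretrained weights plus the scaled sum of all task vectors.
    """
    merged = {}
    for name, base in pretrained.items():
        task_sum = sum(ckpt[name] - base for ckpt in finetuned)
        merged[name] = base + scaling * task_sum
    return merged


if __name__ == "__main__":
    # Toy example: two "tasks" fine-tuned from a single 2x2 weight matrix.
    base = {"w": torch.zeros(2, 2)}
    task_a = {"w": torch.ones(2, 2)}
    task_b = {"w": -0.5 * torch.ones(2, 2)}
    print(merge_by_task_arithmetic(base, [task_a, task_b], scaling=0.3)["w"])
```

In practice the state dicts would come from `model.state_dict()` of the pretrained and fine-tuned checkpoints; the listed projects differ mainly in how they choose or sparsify the per-task contributions before this final addition.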