gstoica27 / KnOTS
Model Merging with SVD to Tie the KnOTS [ICLR 2025]
☆82 · Updated 9 months ago
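The repository accompanies the paper above, which merges fine-tuned models by aligning their task vectors (weight deltas from a shared base) in a basis obtained via SVD. The sketch below is a minimal, generic illustration of that idea only, not the official KnOTS algorithm; the function name, the rank cutoff, and the plain averaging of coefficients are illustrative assumptions.

```python
# Generic sketch of SVD-based task-vector merging -- NOT the official KnOTS
# implementation. merge_with_svd, rank=8, and coefficient averaging are
# illustrative assumptions, not details from the paper.
import numpy as np

def merge_with_svd(base, finetuned, rank=8):
    """Project each model's task vector onto a shared SVD basis, average
    the coordinates there, and add the merged delta back to the base."""
    deltas = [w - base for w in finetuned]             # per-task weight deltas
    stacked = np.concatenate(deltas, axis=1)           # share one column space
    u, s, vt = np.linalg.svd(stacked, full_matrices=False)
    u_r = u[:, :rank]                                  # shared rank-r basis
    coords = [u_r.T @ d for d in deltas]               # per-task coordinates
    merged_delta = u_r @ np.mean(coords, axis=0)       # average in the basis
    return base + merged_delta

# Toy usage: two "fine-tuned" models perturbed around one base matrix.
rng = np.random.default_rng(0)
base = rng.normal(size=(64, 64))
models = [base + 0.01 * rng.normal(size=base.shape) for _ in range(2)]
merged = merge_with_svd(base, models)
```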
Alternatives and similar repositories for KnOTS
Users interested in KnOTS are comparing it to the libraries listed below.
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆57 · Updated last year
- [NeurIPS 2024] VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections ☆21 · Updated last year
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆61 · Updated last year
- Data distillation benchmark ☆71 · Updated 6 months ago
- [ACL 2025] Unsolvable Problem Detection: Robust Understanding Evaluation for Large Multimodal Models ☆79 · Updated 7 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆74 · Updated 10 months ago
- [ICCV 2025] Auto-interpretation pipeline and other tooling for multimodal SAE analysis ☆172 · Updated 3 months ago
- Official PyTorch implementation of "Vision-Language Models Create Cross-Modal Task Representations" (ICML 2025) ☆31 · Updated 8 months ago
- [NAACL 2025 Oral] Multimodal Needle in a Haystack (MMNeedle): Benchmarking Long-Context Capability of Multimodal Large Language Models ☆54 · Updated 8 months ago
- Implementation of the paper "Training-Free Pretrained Model Merging" (CVPR 2024) ☆32 · Updated last year
- Code for the paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆24 · Updated last year
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆31 · Updated last year
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆142 · Updated 9 months ago
- SparCL: Sparse Continual Learning on the Edge (NeurIPS 2022) ☆30 · Updated 2 years ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆190 · Updated last year
- ☆140 · Updated 2 months ago
- ☆36 · Updated 9 months ago
- ☆191 · Updated last year
- SLTrain: a sparse plus low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024) ☆38 · Updated last year
- What do we learn from inverting CLIP models? ☆57 · Updated last year
- Model Stock: All we need is just a few fine-tuned models ☆128 · Updated 4 months ago
- Matryoshka Multimodal Models ☆121 · Updated 11 months ago
- AnchorAttention: Improved attention for long-context LLM training ☆213 · Updated 11 months ago
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆45 · Updated 5 months ago
- [ICLR 2025] Source code for the paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr…" ☆79 · Updated last year
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- Official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆40 · Updated last year
- Code for T-MARS data filtering ☆35 · Updated 2 years ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" ☆108 · Updated 2 years ago
- Code for "Merging Text Transformers from Different Initializations" ☆20 · Updated 11 months ago