mrflogs / LoRA-Pro
Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?"
☆102 · Updated 4 months ago
Alternatives and similar repositories for LoRA-Pro:
Users interested in LoRA-Pro are comparing it to the repositories listed below.
- ☆95 · Updated 7 months ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆128 · Updated 3 weeks ago
- ☆179 · Updated 4 months ago
- ☆125 · Updated 7 months ago
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models ☆73 · Updated 11 months ago
- ☆166 · Updated 7 months ago
- [MM2024, Oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆50 · Updated 7 months ago
- Awesome-Low-Rank-Adaptation ☆81 · Updated 4 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆114 · Updated 4 months ago
- ☆17 · Updated 3 months ago
- MoCLE (first MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆33 · Updated 10 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆91 · Updated this week
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆17 · Updated last week
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆45 · Updated this week
- ☆94 · Updated last year
- Dataset pruning for ImageNet and LAION-2B. ☆72 · Updated 7 months ago
- ☆31 · Updated last year
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆34 · Updated 3 months ago
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ☆50 · Updated 3 months ago
- Code for Merging Large Language Models ☆29 · Updated 6 months ago
- [ICLR 2025 Spotlight] When Attention Sink Emerges in Language Models: An Empirical View ☆49 · Updated 4 months ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆89 · Updated last month
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆35 · Updated 10 months ago
- State-of-the-art parameter-efficient MoE fine-tuning method ☆134 · Updated 6 months ago
- The official repo for Debiasing Large Visual Language Models, including a post-hoc debiasing method and a Visual Debias Decoding strategy ☆75 · Updated last week
- ☆62 · Updated 8 months ago
- [NeurIPS 2024 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆165 · Updated 3 months ago
- Code release for VTW (AAAI 2025 Oral) ☆32 · Updated last month
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆71 · Updated 3 weeks ago