mrflogs / LoRA-Pro
Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?"
☆117 · Updated last month
Alternatives and similar repositories for LoRA-Pro
Users interested in LoRA-Pro are comparing it to the libraries listed below.
- ☆105 · Updated 11 months ago
- ☆198 · Updated 7 months ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆141 · Updated 3 months ago
- ☆138 · Updated 10 months ago
- Codes for Merging Large Language Models ☆31 · Updated 10 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆77 · Updated last year
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆59 · Updated 3 months ago
- ☆174 · Updated 10 months ago
- Official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆38 · Updated 7 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆115 · Updated last month
- Dataset pruning for ImageNet and LAION-2B. ☆79 · Updated 11 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆162 · Updated 9 months ago
- ☆18 · Updated 6 months ago
- [MM 2024, Oral] "Self-Supervised Visual Preference Alignment" (https://arxiv.org/abs/2404.10501) ☆55 · Updated 10 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆37 · Updated 7 months ago
- Awesome-Low-Rank-Adaptation ☆102 · Updated 7 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR 2024. ☆82 · Updated 7 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆65 · Updated this week
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆69 · Updated 7 months ago
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆38 · Updated last year
- ☆24 · Updated last year
- Code for ACL 2024 "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆19 · Updated 3 months ago
- [Preprint 2025] Thinkless: LLM Learns When to Think ☆125 · Updated this week
- Model Merging with SVD to Tie the KnOTS [ICLR 2025] ☆56 · Updated 2 months ago
- ☆37 · Updated 10 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆97 · Updated 3 months ago
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆31 · Updated last year
- A curated list of Model Merging methods. ☆92 · Updated 8 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆203 · Updated 6 months ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 8 months ago