mrflogs / LoRA-Pro
Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?"
☆116 · Updated last month
Alternatives and similar repositories for LoRA-Pro
Users interested in LoRA-Pro are comparing it to the repositories listed below.
- ☆101 · Updated 10 months ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆141 · Updated 3 months ago
- ☆194 · Updated 6 months ago
- ☆174 · Updated 10 months ago
- ☆134 · Updated 9 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆199 · Updated 5 months ago
- [MM 2024, Oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆55 · Updated 9 months ago
- Awesome-Low-Rank-Adaptation ☆95 · Updated 7 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆76 · Updated last year
- ☆24 · Updated 11 months ago
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆38 · Updated 7 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆89 · Updated 3 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆80 · Updated 6 months ago
- Dataset pruning for ImageNet and LAION-2B. ☆79 · Updated 10 months ago
- [ICLR 2025] Official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont… ☆36 · Updated 5 months ago
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆97 · Updated 2 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆58 · Updated 2 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆107 · Updated 3 weeks ago
- Code for ACL 2024 "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆19 · Updated 2 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆141 · Updated 2 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆69 · Updated 7 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆67 · Updated 3 months ago
- Official PyTorch implementation of "OwLore: Outlier-weighed Layerwise Sampled Low-Rank Projection for Memory-Efficient LLM Fine-tuning" b… ☆31 · Updated 11 months ago
- ☆18 · Updated 5 months ago
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆37 · Updated last year
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆31 · Updated last year
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆161 · Updated 8 months ago
- Efficient Mixture of Experts for LLM Paper List ☆64 · Updated 5 months ago
- A paper list about Token Merge, Reduce, Resample, Drop for MLLMs. ☆54 · Updated 4 months ago
- Code for Merging Large Language Models ☆29 · Updated 9 months ago