mrflogs / LoRA-Pro
Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?"
☆143 · Updated 9 months ago
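For context on the repositories below: LoRA keeps a pretrained weight W frozen and learns only a low-rank update ΔW = (α/r)·BA. The sketch below shows that generic technique in PyTorch; it is a minimal illustration, not the LoRA-Pro optimizer from the paper, and the class and parameter names are illustrative rather than taken from this repo.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal generic LoRA adapter (illustrative sketch, not LoRA-Pro)."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # frozen pretrained weight W
        # Low-rank factors: B starts at zero, so W + (alpha/r) * B @ A == W
        # at initialization and training begins from the pretrained model.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + scaling * (x A^T) B^T: frozen path plus low-rank adapter path
        return self.base(x) + self.scaling * (x @ self.A.T) @ self.B.T
```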
Alternatives and similar repositories for LoRA-Pro
Users interested in LoRA-Pro are comparing it to the repositories listed below.
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆165 · Updated 6 months ago
- ☆125 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆84 · Updated last year
- ☆217 · Updated 2 months ago
- ☆174 · Updated last year
- ☆195 · Updated last year
- ☆43 · Updated last year
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆36 · Updated last year
- ☆152 · Updated last year
- [MM 2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆61 · Updated last year
- [TMLR 2025] SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆146 · Updated 3 months ago
- [NeurIPS 2024] Code for the paper "Parameter Competition Balancing for Model Merging" ☆48 · Updated last year
- Code for merging large language models ☆35 · Updated last year
- MoCLE (first MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆45 · Updated 6 months ago
- ☆28 · Updated last year
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆40 · Updated last year
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆91 · Updated last year
- A block pruning framework for LLMs. ☆27 · Updated 8 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆234 · Updated last year
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆202 · Updated last year
- Official PyTorch implementation of "Outlier-weighed Layerwise Sampling for LLM Fine-tuning" by Pengxiang Li, Lu Yin, Xiaowei Gao, Shiwei … ☆35 · Updated 7 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆152 · Updated 6 months ago
- ☆141 · Updated 10 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆72 · Updated last year
- [ICLR 2024] Repository for the paper "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning" ☆101 · Updated last year
- Code for the paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆30 · Updated last year
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆46 · Updated last year
- [ICLR 2024] AdaMerging: Adaptive Model Merging for Multi-Task Learning ☆100 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆152 · Updated 6 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 7 months ago