THUDM / Awesome-Parameter-Efficient-Fine-Tuning-for-Foundation-Models
Parameter-Efficient Fine-Tuning for Foundation Models
☆60 · Updated last month
Alternatives and similar repositories for Awesome-Parameter-Efficient-Fine-Tuning-for-Foundation-Models:
Users interested in Awesome-Parameter-Efficient-Fine-Tuning-for-Foundation-Models are comparing it to the libraries listed below.
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆116 · Updated last month
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆86 · Updated last year
- This project aims to collect and collate various datasets for multimodal large model training, including but not limited to pre-training … ☆43 · Updated this week
- ☆40 · Updated 2 months ago
- ☆100 · Updated 10 months ago
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆79 · Updated 3 months ago
- The official implementation of MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR 2024) ☆48 · Updated last month
- Can Atomic Step Decomposition Enhance the Self-structured Reasoning of Multimodal Large Models? ☆23 · Updated last month
- 🎉 The code repository for "Parrot: Multilingual Visual Instruction Tuning" in PyTorch ☆37 · Updated last week
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆99 · Updated 2 weeks ago
- This repo contains the source code for VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks (NeurIPS 2024) ☆37 · Updated 6 months ago
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning of 175B LLMs with 18GB GPU Memory ☆93 · Updated last week
- MoCLE (the first MLLM with MoE for instruction customization and generalization) (https://arxiv.org/abs/2312.12379) ☆37 · Updated last year
- [NeurIPS 2024] A Novel Rank-Based Metric for Evaluating Large Language Models ☆46 · Updated 5 months ago
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models ☆75 · Updated last year
- Official code of "Virgo: A Preliminary Exploration on Reproducing o1-like MLLM" ☆100 · Updated 2 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆32 · Updated 6 months ago
- ☆91 · Updated last month
- Official PyTorch implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆70 · Updated 5 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆59 · Updated this week
- ☆40 · Updated this week
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning ☆139 · Updated 3 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆88 · Updated 3 months ago
- Official implementation for the EMNLP 2024 (main) paper "AgentReview: Exploring Academic Peer Review with LLM Agent" ☆51 · Updated 5 months ago
- The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆44 · Updated last week
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆104 · Updated last week
- Implementation of "the first large-scale multimodal mixture-of-experts models" from the paper "Multimodal Contrastive Learning with… ☆29 · Updated last month
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆67 · Updated 2 months ago
- ICLR 2025 ☆22 · Updated 2 months ago
- Repository for the NeurIPS 2024 paper "SearchLVLMs: A Plug-and-Play Framework for Augmenting Large Vision-Language Models by Searching Up… ☆23 · Updated 5 months ago
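Several of the repositories above (LoRA-Pro, VB-LoRA, MTLoRA, Sparse Low-rank Adaptation) build on low-rank adaptation (LoRA): the pretrained weight `W` is frozen and only a rank-`r` update `B @ A` is trained. The following NumPy sketch is purely illustrative and not taken from any of the listed codebases; the dimensions and names are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 512, 512, 8  # output dim, input dim, adapter rank (r << d, k)

W = rng.standard_normal((d, k))         # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # trainable; zero init so B @ A = 0 at start

def lora_forward(x):
    # y = x @ (W + B @ A).T, computed without materializing the dense d x k update
    return x @ W.T + (x @ A.T) @ B.T

x = rng.standard_normal((4, k))
y = lora_forward(x)

# With B = 0 at initialization, the adapter is a no-op:
assert np.allclose(y, x @ W.T)

# Trainable parameter count: r * (d + k) for the adapter vs d * k for full fine-tuning
print(r * (d + k), "adapter params vs", d * k, "full")  # 8192 vs 262144
```

The zero initialization of `B` is the standard LoRA trick: training starts exactly at the pretrained model, and only `r * (d + k)` parameters receive gradients, which is what makes these methods "parameter-efficient".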