LiangJian24 / LoRASculpt
[CVPR'25 Oral] LoRASculpt: Sculpting LoRA for Harmonizing General and Specialized Knowledge in Multimodal Large Language Models
☆35Updated last month
Alternatives and similar repositories for LoRASculpt
Users interested in LoRASculpt are comparing it to the libraries listed below
- 🌟 A step-by-step tutorial on inserting code links into your paper☆22Updated 2 months ago
- [CVPR 2024] Official PyTorch Code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models"☆331Updated last month
- Multimodal Prompting with Missing Modalities for Visual Recognition, CVPR'23☆217Updated last year
- Code for Sam-Guided Enhanced Fine-Grained Encoding with Mixed Semantic Learning for Medical Image Captioning☆15Updated last year
- [CVPR 2024] FairCLIP: Harnessing Fairness in Vision-Language Learning☆89Updated 2 months ago
- [MICCAI 2024] Can LLMs' Tuning Methods Work in Medical Multimodal Domain?☆17Updated last year
- Multimodal Large Language Model (MLLM) Tuning Survey: Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model☆81Updated 2 months ago
- An easy way to apply LoRA to CLIP. Implementation of the paper "Low-Rank Few-Shot Adaptation of Vision-Language Models" (CLIP-LoRA) [CVPR…☆251Updated 4 months ago
- [WACV 2025] Code for Enhancing Vision-Language Few-Shot Adaptation with Negative Learning☆10Updated 7 months ago
- Code for paper: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large Language Models☆33Updated 9 months ago
- Detecting and Evaluating Medical Hallucinations in Large Vision Language Models☆11Updated last year
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training☆102Updated last year
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of paper titled "Self-regulating Prompts: Foundational Model Adaptation without F…☆277Updated 2 years ago
- [ICCV 2025] Official PyTorch Code for "Advancing Textual Prompt Learning with Anchored Attributes"☆99Updated last month
- [ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs'☆274Updated 5 months ago
- The repo for "Enhancing Multi-modal Cooperation via Sample-level Modality Valuation", CVPR 2024☆55Updated 11 months ago
- [AAAI2024] Official implementation of TGP-T☆29Updated last year
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models".☆82Updated 5 months ago
- ☆46Updated 7 months ago
- ☆52Updated 3 months ago
- The code for paper: PeFoM-Med: Parameter Efficient Fine-tuning on Multi-modal Large Language Models for Medical Visual Question Answering☆56Updated 3 months ago
- [NeurIPS2024] Repo for the paper `ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models'☆193Updated 2 months ago
- The official repository of the paper 'Towards a Multimodal Large Language Model with Pixel-Level Insight for Biomedicine'☆93Updated 9 months ago
- [CVPR'24 Highlight] Implementation of "Causal-CoG: A Causal-Effect Look at Context Generation for Boosting Multi-modal Language Models"☆15Updated last year
- Easy wrapper for inserting LoRA layers in CLIP.☆40Updated last year
- [ACL'25 Main] Official Implementation of HiDe-LLaVA: Hierarchical Decoupling for Continual Instruction Tuning of Multimodal Large Languag…☆32Updated last month
- The official repo of "Knowledge Bridger: Towards Training-Free Missing Modality Completion"☆18Updated 3 months ago
- 🔎Official code for our paper: "VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation".☆44Updated 6 months ago
- Source code of our AAAI 2024 paper "Cross-Modal and Uni-Modal Soft-Label Alignment for Image-Text Retrieval"☆48Updated last year
- A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP.☆682Updated last month