yunncheng / MMRL
[CVPR 2025] Official PyTorch Code for "MMRL: Multi-Modal Representation Learning for Vision-Language Models" and its extension "MMRL++: Parameter-Efficient and Interaction-Aware Representation Learning for Vision-Language Models".
☆48 · Updated this week
Alternatives and similar repositories for MMRL
Users interested in MMRL are comparing it to the repositories listed below.
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆66 · Updated last month
- [CVPR 2025] Few-shot Recognition via Stage-Wise Retrieval-Augmented Finetuning ☆19 · Updated this week
- Pytorch Implementation for CVPR 2024 paper: Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation ☆45 · Updated 2 weeks ago
- cliptrase ☆35 · Updated 9 months ago
- VisRL: Intention-Driven Visual Perception via Reinforced Reasoning ☆29 · Updated last week
- Official Implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning", ICCV 2023 ☆53 · Updated last year
- [ICLR'24] Consistency-guided Prompt Learning for Vision-Language Models ☆75 · Updated last year
- Repository for the paper: Teaching VLMs to Localize Specific Objects from In-context Examples ☆23 · Updated 6 months ago
- The official code for "TextRefiner: Internal Visual Feature as Efficient Refiner for Vision-Language Models Prompt Tuning" | [AAAI2025] ☆41 · Updated 3 months ago
- CVPR2024: Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆73 · Updated 11 months ago
- Official implementation of CVPR 2024 paper "Retrieval-Augmented Open-Vocabulary Object Detection". ☆41 · Updated 9 months ago
- [CVPR 2024] Official implementation of the paper "DePT: Decoupled Prompt Tuning" ☆104 · Updated 3 weeks ago
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆49 · Updated 11 months ago
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆54 · Updated 7 months ago
- ☆64 · Updated 2 months ago
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated 3 months ago
- [NeurIPS 2024] TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration ☆24 · Updated 8 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆42 · Updated 8 months ago
- [CVPR2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆95 · Updated 3 weeks ago
- Official Implementation of DiffCLIP: Differential Attention Meets CLIP ☆36 · Updated 3 months ago
- ☆21 · Updated last year
- Code for Label Propagation for Zero-shot Classification with Vision-Language Models (CVPR2024) ☆39 · Updated 11 months ago
- This repository houses the code for the paper "The Neglected Tails in Vision-Language Models" ☆28 · Updated last month
- Implementation of "the first large-scale multimodal mixture of experts models" from the paper: "Multimodal Contrastive Learning with… ☆31 · Updated 2 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆60 · Updated last year
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆61 · Updated 2 weeks ago
- [ECCV2024] ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation ☆91 · Updated 3 months ago
- [CVPR 2025 Highlight] Official Pytorch codebase for paper: "Assessing and Learning Alignment of Unimodal Vision and Language Models" ☆41 · Updated last week
- Official Repository of Personalized Visual Instruct Tuning ☆29 · Updated 3 months ago
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning ☆23 · Updated last week