This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025]
☆579 · Updated Feb 11, 2026
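VLM2Vec turns a pretrained vision-language model into a universal multimodal embedder via contrastive training on instruction-paired query/target data. As a quick illustration only (this is not code from the repository; the function and tensor names are invented for the example), here is a minimal PyTorch sketch of the in-batch InfoNCE objective that embedding models of this kind are typically trained with:

```python
# Minimal, illustrative sketch of an in-batch InfoNCE contrastive loss,
# of the kind commonly used to train VLM-based embedding models.
# All names here are invented for illustration.
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb: torch.Tensor,
                  target_emb: torch.Tensor,
                  temperature: float = 0.02) -> torch.Tensor:
    """Each query's positive is the target at the same batch index;
    all other targets in the batch serve as negatives."""
    q = F.normalize(query_emb, dim=-1)   # (B, D) L2-normalized queries
    t = F.normalize(target_emb, dim=-1)  # (B, D) L2-normalized targets
    logits = q @ t.T / temperature       # (B, B) scaled cosine similarities
    labels = torch.arange(q.size(0), device=q.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    # Random stand-in embeddings just to show the call signature.
    B, D = 8, 1536
    loss = info_nce_loss(torch.randn(B, D), torch.randn(B, D))
    print(f"contrastive loss: {loss.item():.4f}")
```

A low temperature (here 0.02) sharpens the softmax over in-batch negatives; the exact value and batch size are training hyperparameters that vary across the repositories listed below.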
Alternatives and similar repositories for VLM2Vec
Users interested in VLM2Vec are comparing it to the repositories listed below.
- LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning (☆77 · Updated May 23, 2025)
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" [ECCV 2024] (☆178 · Updated Oct 1, 2024)
- ABC: Achieving Better Control of Multimodal Embeddings using VLMs [TMLR 2025] (☆20 · Updated Aug 21, 2025)
- ☆58 · Updated Feb 27, 2025
- E5-V: Universal Embeddings with Multimodal Large Language Models (☆274 · Updated Dec 10, 2025)
- LLM2CLIP significantly improves already state-of-the-art CLIP models (☆630 · Updated Feb 1, 2026)
- [CVPR 2025] LamRA: Large Multimodal Model as Your Advanced Retrieval Assistant (☆178 · Updated Jul 7, 2025)
- ☆37 · Updated Jan 12, 2026
- [ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" (☆103 · Updated Dec 8, 2025)
- [ACL 2025 Oral] MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval (☆243 · Updated Nov 6, 2025)
- Code for "CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning" (☆32 · Updated Mar 26, 2025)
- ☆23 · Updated Oct 16, 2025
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" (☆42 · Updated Jul 4, 2025)
- Toward Universal Multimodal Embedding (☆74 · Updated Aug 1, 2025)
- More reliable video understanding evaluation (☆14 · Updated Sep 23, 2025)
- EVE Series: Encoder-Free Vision-Language Models from BAAI (☆368 · Updated Jul 24, 2025)
- [NeurIPS 2025] The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" (☆182 · Updated Jun 5, 2025)
- Collection of Composed Image Retrieval (CIR) papers (☆312 · Updated Dec 22, 2025)
- ☆17 · Updated Mar 5, 2025
- ☆57 · Updated Aug 16, 2025
- Official PyTorch implementation of "MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced …" (☆91 · Updated Nov 15, 2024)
- ☆67 · Updated Aug 14, 2025
- [EMNLP 2024] Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality (☆21 · Updated Oct 8, 2024)
- [CVPR 2024] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback (☆305 · Updated Sep 11, 2024)
- ☆4,577 · Updated Sep 14, 2025
- Code for KaLM-Embedding models (☆114 · Updated Jun 30, 2025)
- Solve Visual Understanding with Reinforced VLMs (☆5,850 · Updated Oct 21, 2025)
- A Recipe for Building LLM Reasoners to Solve Complex Instructions (☆29 · Updated Oct 9, 2025)
- [CVPR 2025 Highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness (☆445 · Updated May 14, 2025)
- A fork to add multimodal model training to open-r1 (☆1,493 · Updated Feb 8, 2025)
- [ECCV 2024] Official PyTorch implementation of "DreamLIP: Language-Image Pre-training with Long Captions" (☆138 · Updated May 8, 2025)
- [ICML 2025] New generation of CLIP with fine-grained discrimination capability (☆550 · Updated Oct 27, 2025)
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding (☆58 · Updated Dec 13, 2024)
- [AAAI 2026 Oral] The official code of "UniME-V2: MLLM-as-a-Judge for Universal Multimodal Embedding Learning" (☆64 · Updated Dec 8, 2025)
- ViCToR: Improving Visual Comprehension via Token Reconstruction for Pretraining LMMs (☆28 · Updated Aug 15, 2025)
- [CVPR 2025] CoLLM: A Large Language Model for Composed Image Retrieval (☆28 · Updated Mar 26, 2025)
- Witness the "aha moment" of VLMs for less than $3 (☆4,036 · Updated May 19, 2025)
- Code for "LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders" (☆1,652 · Updated Dec 4, 2025)
- [ICLR 2026] The first paper to explore how to effectively use R1-like RL for MLLMs, introducing Vision-R1, a reasoning MLLM that… (☆773 · Updated Jan 26, 2026)