VectorSpaceLab / MegaPairs
MegaPairs: Massive Data Synthesis For Universal Multimodal Retrieval
☆181 · Updated last week
Alternatives and similar repositories for MegaPairs
Users interested in MegaPairs are comparing it to the repositories listed below.
- Research Code for Multimodal-Cognition Team in Ant Group ☆146 · Updated 2 weeks ago
- Repo for Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent ☆329 · Updated last month
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ☆234 · Updated 3 months ago
- ☆173 · Updated 3 months ago
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR25] ☆233 · Updated 2 months ago
- ☆362 · Updated 3 months ago
- The Hugging Face implementation of Fine-grained Late-interaction Multi-modal Retriever. ☆88 · Updated this week
- This project aims to collect and collate various datasets for multimodal large model training, including but not limited to pre-training … ☆45 · Updated 3 weeks ago
- 🔥🔥 First-ever hour-scale video understanding models ☆331 · Updated this week
- A Survey of Multimodal Retrieval-Augmented Generation ☆18 · Updated last month
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 · Updated 6 months ago
- Train a LLaVA model with better Chinese support; the training code and data are open-sourced. ☆59 · Updated 8 months ago
- ☆34 · Updated 3 weeks ago
- R1-onevision, a visual language model capable of deep CoT reasoning. ☆524 · Updated last month
- ☆269 · Updated last week
- Official PyTorch Implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆78 · Updated 6 months ago
- ☆68 · Updated last year
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆278 · Updated 8 months ago
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆368 · Updated 2 weeks ago
- ☆79 · Updated last year
- [ICML'24 Oral] "MagicLens: Self-Supervised Image Retrieval with Open-Ended Instructions" ☆178 · Updated 7 months ago
- This is the first paper to explore how to effectively use RL for MLLMs and introduce Vision-R1, a reasoning MLLM that leverages cold-sta… ☆579 · Updated 3 weeks ago
- WWW2025 Multimodal Intent Recognition for Dialogue Systems Challenge ☆123 · Updated 6 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆159 · Updated 2 months ago
- vLLM-accelerated implementation of GOT, combined with MinerU for PDF parsing in RAG. ☆57 · Updated 6 months ago
- The official code for NeurIPS 2024 paper: Harmonizing Visual Text Comprehension and Generation ☆126 · Updated 6 months ago
- The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆73 · Updated 2 weeks ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆296 · Updated 3 months ago
- This project uses the LLaVA 1.6 multimodal model to implement text-to-image and image-to-image search. ☆23 · Updated last year
- [arXiv] PDF-Wukong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling ☆117 · Updated 7 months ago