VectorSpaceLab / MegaPairs
[ACL 2025 Oral] 🔥🔥 MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval
☆233 · Updated last week
Alternatives and similar repositories for MegaPairs
Users interested in MegaPairs are comparing it to the repositories listed below.
- Research Code for Multimodal-Cognition Team in Ant Group ☆169 · Updated last month
- Repo for Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent ☆390 · Updated 6 months ago
- ☆186 · Updated 9 months ago
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ☆256 · Updated last week
- ☆379 · Updated 9 months ago
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] ☆473 · Updated this week
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab. ☆262 · Updated last month
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆423 · Updated 6 months ago
- ☆693 · Updated 2 weeks ago
- R1-onevision, a visual language model capable of deep CoT reasoning. ☆569 · Updated 7 months ago
- Repo for "VRAG-RL: Empower Vision-Perception-Based RAG for Visually Rich Information Understanding via Iterative Reasoning with Reinforce…" ☆391 · Updated last month
- ☆58 · Updated 5 months ago
- A toolkit on knowledge distillation for large language models ☆195 · Updated last week
- Training a LLaVA model with better Chinese support; the training code and data are open-sourced. ☆76 · Updated last year
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆401 · Updated 6 months ago
- The huggingface implementation of Fine-grained Late-interaction Multi-modal Retriever. ☆101 · Updated 5 months ago
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆281 · Updated 2 months ago
- [ArXiv] PDF-Wukong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling ☆127 · Updated 5 months ago
- 🔥🔥 First-ever hour-scale video understanding models ☆572 · Updated 4 months ago
- ☆102 · Updated last week
- The official code for the NeurIPS 2024 paper: Harmonizing Visual Text Comprehension and Generation ☆129 · Updated 11 months ago
- [ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆94 · Updated 3 months ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆353 · Updated 2 weeks ago
- A Survey of Multimodal Retrieval-Augmented Generation ☆20 · Updated last week
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 · Updated last year
- Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks ☆301 · Updated last year
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆268 · Updated 9 months ago
- Dataset and Code for our ACL 2024 paper: "Multimodal Table Understanding". We propose the first large-scale Multimodal IFT and Pre-Train … ☆219 · Updated 5 months ago
- Evaluation code and datasets for the ACL 2024 paper, VISTA: Visualized Text Embedding for Universal Multi-Modal Retrieval. The original c… ☆43 · Updated 11 months ago
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ☆347 · Updated 2 months ago