VectorSpaceLab / MegaPairs
[ACL 2025 Oral] 🔥🔥 MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval
☆203 · Updated last month
Alternatives and similar repositories for MegaPairs
Users interested in MegaPairs are comparing it to the repositories listed below.
- Research Code for Multimodal-Cognition Team in Ant Group ☆154 · Updated this week
- ☆173 · Updated 5 months ago
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ☆244 · Updated 4 months ago
- Repo for Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent ☆349 · Updated 2 months ago
- ☆366 · Updated 5 months ago
- Repo for "VRAG-RL: Empower Vision-Perception-Based RAG for Visually Rich Information Understanding via Iterative Reasoning with Reinforce… ☆274 · Updated last week
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] ☆291 · Updated this week
- GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning. ☆700 · Updated this week
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆125 · Updated 8 months ago
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆374 · Updated 2 months ago
- 🔥🔥 First-ever hour-scale video understanding models ☆490 · Updated this week
- ☆41 · Updated last month
- R1-onevision, a visual language model capable of deep CoT reasoning. ☆541 · Updated 3 months ago
- Train a LLaVA model with better Chinese support, with the training code and data open-sourced. ☆64 · Updated 10 months ago
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆385 · Updated last month
- ☆437 · Updated this week
- [ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆80 · Updated last week
- ☆69 · Updated 2 years ago
- The huggingface implementation of Fine-grained Late-interaction Multi-modal Retriever. ☆92 · Updated last month
- Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆231 · Updated last month
- [ACM'MM 2024 Oral] Official code for "OneChart: Purify the Chart Structural Extraction via One Auxiliary Token" ☆225 · Updated 2 months ago
- official code for "Fox: Focus Anywhere for Fine-grained Multi-page Document Understanding" ☆151 · Updated last year
- ☆80 · Updated last year
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆308 · Updated 4 months ago
- The official code for NeurIPS 2024 paper: Harmonizing Visual Text Comprehension and Generation ☆129 · Updated 7 months ago
- An ecosystem of tools for large language models and multimodal models, mainly covering cross-modal search, speculative decoding, QAT quantization, multimodal quantization, chatbots, and OCR ☆184 · Updated 2 weeks ago
- Official PyTorch Implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆78 · Updated 7 months ago
- DeepSpeed tutorials, annotated examples, and study notes (efficient training of large models) ☆169 · Updated last year
- [arXiv] PDF-Wukong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling ☆121 · Updated last month
- ☆136 · Updated last year