Letian2003 / MM_INF
An efficient multi-modal instruction-following data synthesis tool and the official implementation of Oasis (https://arxiv.org/abs/2503.08741).
☆30 · Updated 3 months ago
Alternatives and similar repositories for MM_INF
Users interested in MM_INF are comparing it to the libraries listed below.
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM ☆48 · Updated last year
- [ACM MM2025] The official repository for the RealSyn dataset ☆37 · Updated 2 months ago
- A subset of YFCC100M. Tools, checking scripts, and web-drive links for downloading the datasets (uncompressed). ☆20 · Updated 10 months ago
- The SAIL-VL2 series model developed by the BytedanceDouyinContent Group ☆20 · Updated this week
- OpenThinkIMG is an end-to-end open-source framework that empowers Large Vision-Language Models to think with images. ☆81 · Updated 2 months ago
- "Towards Improving Document Understanding: An Exploration on Text-Grounding via MLLMs" (2023) ☆15 · Updated 9 months ago
- Large Multimodal Model ☆15 · Updated last year
- ☆19 · Updated last year
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆79 · Updated last month
- ☆87 · Updated last year
- Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types ☆29 · Updated 2 months ago
- ☆74 · Updated last year
- Lion: Kindling Vision Intelligence within Large Language Models ☆51 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- A multimodal large-scale model, which performs close to the closed-source Qwen-VL-PLUS on many datasets and significantly surpasses the p… ☆14 · Updated last year
- A huge dataset for Document Visual Question Answering ☆19 · Updated last year
- A Dead Simple and Modularized Multi-Modal Training and Finetune Framework. Compatible to any LLaVA/Flamingo/QwenVL/MiniGemini etc series … ☆19 · Updated last year
- ☆119 · Updated last year
- ☆91 · Updated last year
- The official repo for “TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding”. ☆41 · Updated 11 months ago
- [ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆90 · Updated last month
- The proposed simulated dataset consisting of 9,536 charts and associated data annotations in CSV format. ☆26 · Updated last year
- WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning ☆35 · Updated 3 months ago
- MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering. A comprehensive evaluation of multimodal large model multilingua… ☆62 · Updated 4 months ago
- ☆16 · Updated 2 years ago
- Chinese CLIP models with SOTA performance. ☆58 · Updated 2 years ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆55 · Updated 10 months ago
- ChineseCLIP using online learning ☆13 · Updated 2 years ago
- A collection of visual instruction tuning datasets. ☆76 · Updated last year
- ☆22 · Updated last year