Letian2003 / MM_INF
An efficient multi-modal instruction-following data synthesis tool and the official implementation of Oasis (https://arxiv.org/abs/2503.08741).
☆29 · Updated 2 months ago
Alternatives and similar repositories for MM_INF
Users interested in MM_INF are comparing it to the repositories listed below.
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM · ☆46 · Updated last year
- [ACM MM2025] The official repository for the RealSyn dataset · ☆36 · Updated last month
- Lion: Kindling Vision Intelligence within Large Language Models · ☆52 · Updated last year
- Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types · ☆27 · Updated 3 weeks ago
- Large Multimodal Model · ☆15 · Updated last year
- "Towards Improving Document Understanding: An Exploration on Text-Grounding via MLLMs" (2023) · ☆14 · Updated 8 months ago
- OpenThinkIMG: an end-to-end open-source framework that empowers Large Vision-Language Models to think with images · ☆70 · Updated last month
- ☆119 · Updated last year
- ☆91 · Updated last year
- A subset of YFCC100M: tools, checking scripts, and web-drive links for downloading the datasets (uncompressed) · ☆19 · Updated 8 months ago
- ☆87 · Updated last year
- ☆19 · Updated last year
- ☆73 · Updated last year
- Rex-Thinker: Grounded Object Refering via Chain-of-Thought Reasoning · ☆106 · Updated last month
- ChineseCLIP using online learning · ☆13 · Updated 2 years ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward · ☆72 · Updated this week
- ☆17 · Updated 2 years ago
- A dead-simple, modularized multi-modal training and finetuning framework, compatible with any LLaVA/Flamingo/QwenVL/MiniGemini etc. series … · ☆19 · Updated last year
- A Simple Framework of Small-scale LMMs for Video Understanding · ☆73 · Updated last month
- A huge dataset for Document Visual Question Answering · ☆19 · Updated last year
- ☆22 · Updated last year
- Benchmarking Attention Mechanisms in Vision Transformers · ☆18 · Updated 2 years ago
- WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning · ☆33 · Updated 2 months ago
- A multimodal large-scale model that performs close to the closed-source Qwen-VL-PLUS on many datasets and significantly surpasses the p… · ☆14 · Updated last year
- The official repo for "TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding" · ☆41 · Updated 10 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs · ☆44 · Updated last year
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team · ☆73 · Updated 9 months ago
- A collection of visual instruction tuning datasets · ☆76 · Updated last year
- [ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" · ☆83 · Updated last month
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab · ☆136 · Updated this week