xmu-xiaoma666 / Multimodal-Open-O1
Multimodal Open-O1 (MO1) is designed to improve the accuracy of inference models through a novel prompt-based approach. The tool runs locally and aims to build reasoning chains akin to those of OpenAI-o1 without relying on remote compute; an illustrative sketch of this general pattern appears below.
☆29 · Updated 10 months ago
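The listing itself does not show MO1's prompting scheme, but the idea it describes, coaxing o1-style reasoning chains out of a locally hosted model purely through prompting, can be sketched generically. The sketch below is an assumption-laden illustration, not MO1's actual code: `local_generate` is a hypothetical placeholder for whatever local completion call you use (llama.cpp, a `transformers` pipeline, etc.), and the step/`FINAL:` protocol is invented for the example.

```python
# Illustrative sketch of a prompt-based reasoning-chain loop, NOT MO1's code.
# `local_generate` is a hypothetical placeholder: wire it to any local LLM
# call that maps a prompt string to generated text.

def local_generate(prompt: str) -> str:
    raise NotImplementedError("plug in your local model here")

STEP_PROMPT = (
    "You are solving a problem step by step.\n"
    "Problem: {problem}\n"
    "Steps so far:\n{steps}\n"
    "Write the single next reasoning step. If the answer is reached, "
    "begin your reply with 'FINAL:'."
)

def reasoning_chain(problem: str, max_steps: int = 8) -> tuple[list[str], str]:
    """Accumulate reasoning steps one prompt at a time, then a final answer."""
    steps: list[str] = []
    for _ in range(max_steps):
        prompt = STEP_PROMPT.format(
            problem=problem,
            steps="\n".join(f"{i + 1}. {s}" for i, s in enumerate(steps))
            or "(none yet)",
        )
        reply = local_generate(prompt).strip()
        if reply.startswith("FINAL:"):
            # The model signalled completion; return the chain and the answer.
            return steps, reply[len("FINAL:"):].strip()
        steps.append(reply)
    return steps, "(no answer within the step budget)"
```

In this framing, "o1-like" behavior is just an outer loop around an ordinary completion call: the chain lives in the prompt, so everything stays on the local machine, and the quality of the chain depends entirely on the step prompt and the underlying model.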
Alternatives and similar repositories for Multimodal-Open-O1
Users interested in Multimodal-Open-O1 are comparing it to the repositories listed below:
- A Simple Framework of Small-scale LMMs for Video Understanding ☆73 · Updated last month
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆125 · Updated 8 months ago
- ☆86 · Updated last year
- WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning ☆32 · Updated last month
- [ICCV 2025] Dynamic-VLM ☆23 · Updated 7 months ago
- [NeurIPS 2024] TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration ☆24 · Updated 9 months ago
- ☆67 · Updated 2 months ago
- Repo for paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆49 · Updated 4 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆53 · Updated last week
- [ICCV 2025] Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆71 · Updated 5 months ago
- [ICCV'25] Explore the Limits of Omni-modal Pretraining at Scale ☆111 · Updated 11 months ago
- [ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆83 · Updated 3 weeks ago
- [ACL2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆71 · Updated 2 months ago
- ☆118 · Updated last year
- Official code for paper "GRIT: Teaching MLLMs to Think with Images" ☆114 · Updated this week
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆43 · Updated last week
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆62 · Updated 9 months ago
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆47 · Updated 7 months ago
- ☆73 · Updated last year
- Official Implementation of OpenING: A Comprehensive Benchmark for Judging Open-ended Interleaved Image-Text Generation ☆28 · Updated 3 weeks ago
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- ☆51 · Updated last month
- Precision Search through Multi-Style Inputs ☆71 · Updated this week
- Official implementation of MIA-DPO ☆62 · Updated 6 months ago
- The first attempt to replicate o3-like visual clue-tracking reasoning capabilities. ☆58 · Updated 3 weeks ago
- The official repository for our paper, "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reasoning". ☆118 · Updated 3 weeks ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆44 · Updated last year
- Official repository for paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770) ☆156 · Updated 10 months ago
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆164 · Updated last year
- Official repository of "Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning" ☆35 · Updated 5 months ago