xmu-xiaoma666 / Multimodal-Open-O1
Multimodal Open-O1 (MO1) is designed to improve the accuracy of multimodal inference through a novel prompt-based approach. The tool runs entirely locally, building reasoning chains akin to those produced by OpenAI o1 without relying on remote processing.
☆29 · Updated last year
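The core idea described above is straightforward to prototype: repeatedly prompt a local model to emit one reasoning step at a time, feeding prior steps back into the context until the model declares a final answer. Below is a minimal Python sketch of such a loop, not MO1's actual code; the `call_local_model` stub and the step-prompt wording are illustrative assumptions, and the stub should be wired to whatever local LLM/MLLM backend you use.

```python
# Minimal sketch of a prompt-based reasoning-chain loop (hedged example,
# not MO1's implementation). `call_local_model` is a placeholder for any
# local model backend (e.g., transformers, llama.cpp bindings, etc.).

def call_local_model(prompt: str) -> str:
    """Placeholder: replace with a call to your local model."""
    raise NotImplementedError("Wire this to your local model backend.")

def reasoning_chain(question: str, max_steps: int = 5) -> str:
    """Elicit one reasoning step per call until the model emits 'FINAL:'."""
    steps: list[str] = []
    for i in range(max_steps):
        # Replay all prior steps so the model continues its own chain.
        history = "\n".join(f"Step {j + 1}: {s}" for j, s in enumerate(steps))
        prompt = (
            "Answer by reasoning step by step.\n"
            f"Question: {question}\n"
            f"{history}\n"
            f"Step {i + 1} (prefix with 'FINAL:' if this is the answer):"
        )
        step = call_local_model(prompt).strip()
        steps.append(step)
        if step.startswith("FINAL:"):
            return step[len("FINAL:"):].strip()
    # Fall back to the last step if no final answer was declared.
    return steps[-1]
```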
Alternatives and similar repositories for Multimodal-Open-O1
Users interested in Multimodal-Open-O1 are comparing it to the repositories listed below.
- A Simple Framework of Small-scale LMMs for Video Understanding ☆94 · Updated 3 months ago
- WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning ☆35 · Updated 3 months ago
- ☆90 · Updated last year
- [ICCV 2025] Dynamic-VLM ☆25 · Updated 9 months ago
- SFT+RL boosts multimodal reasoning ☆34 · Updated 3 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 · Updated 10 months ago
- Official code for the NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆139 · Updated 2 months ago
- ☆72 · Updated 4 months ago
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆63 · Updated 11 months ago
- The first attempt to replicate o3-like visual clue-tracking reasoning capabilities. ☆57 · Updated 2 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆116 · Updated last year
- [EMNLP 2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆54 · Updated last month
- Official code implementation of Slow Perception: Let's Perceive Geometric Figures Step-by-step ☆130 · Updated 2 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆60 · Updated 2 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆49 · Updated last year
- [ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆77 · Updated 4 months ago
- Repo for the paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆49 · Updated last month
- [ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆92 · Updated last month
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" ☆35 · Updated 3 months ago
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆48 · Updated 2 months ago
- [ICCV 2025] Official repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆77 · Updated 7 months ago
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆32 · Updated 6 months ago
- ☆119 · Updated last year
- Rex-Thinker: Grounded Object Referring via Chain-of-Thought Reasoning ☆122 · Updated 3 months ago
- ☆59 · Updated last month
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- The Next Step Forward in Multimodal LLM Alignment ☆181 · Updated 5 months ago
- ☆74 · Updated last year
- MLLM @ Game ☆14 · Updated 4 months ago
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team ☆74 · Updated 11 months ago