xmu-xiaoma666 / Multimodal-Open-O1
Multimodal Open-O1 (MO1) is designed to improve the accuracy of inference models through a novel prompt-based approach. The tool runs locally and aims to produce reasoning chains akin to those used by OpenAI-o1, relying only on local processing power.
☆29 · Updated last year
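The repository itself documents the exact prompt format; purely as a rough illustration of the idea described above, the sketch below shows how a prompt-based reasoning chain can be built by repeatedly querying a locally hosted model. The `generate` callable, the prompt wording, and the step limit are assumptions for illustration, not MO1's actual interface.

```python
from typing import Callable, List

def reasoning_chain(question: str,
                    generate: Callable[[str], str],
                    max_steps: int = 5) -> List[str]:
    """Build an o1-style chain of intermediate reasoning steps by
    repeatedly prompting a local model (hypothetical interface)."""
    steps: List[str] = []
    for _ in range(max_steps):
        prompt = (
            f"Question: {question}\n"
            "Previous steps:\n" + "\n".join(steps) +
            "\nProduce the next reasoning step, or reply 'FINAL: <answer>' if done."
        )
        step = generate(prompt).strip()
        steps.append(step)
        if step.startswith("FINAL:"):
            break
    return steps

if __name__ == "__main__":
    # Stub generator so the sketch runs without any model installed.
    canned = iter([
        "Identify the objects in the image.",
        "Relate the objects to the question.",
        "FINAL: a red bicycle",
    ])
    print(reasoning_chain("What is in the picture?", lambda p: next(canned)))
```

In practice the stub would be replaced by a call to whatever local multimodal model is available; the point of the loop is only that each step's output is fed back into the next prompt, forming the chain.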
Alternatives and similar repositories for Multimodal-Open-O1
Users interested in Multimodal-Open-O1 are comparing it to the libraries listed below.
- [ICCV 2025] Dynamic-VLM ☆26 · Updated 11 months ago
- ☆90 · Updated last year
- A Simple Framework of Small-scale LMMs for Video Understanding ☆96 · Updated 5 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 · Updated last year
- WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning ☆35 · Updated 5 months ago
- [ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆80 · Updated 5 months ago
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆51 · Updated 3 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆118 · Updated last year
- [ICCV 2025] Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆78 · Updated 8 months ago
- SFT+RL boosts multimodal reasoning ☆37 · Updated 4 months ago
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆61 · Updated 3 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆48 · Updated last year
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆63 · Updated last year
- The first attempt to replicate o3-like visual clue-tracking reasoning capabilities ☆59 · Updated 4 months ago
- LMM solved catastrophic forgetting, AAAI 2025 ☆44 · Updated 7 months ago
- Repo for paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆48 · Updated 2 months ago
- ☆123 · Updated last year
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM ☆48 · Updated last year
- ☆61 · Updated 2 months ago
- ☆37 · Updated 3 months ago
- ☆75 · Updated last year
- ☆99 · Updated 10 months ago
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆32 · Updated 7 months ago
- [CVPR 2025] Online Video Understanding: OVBench and VideoChat-Online ☆73 · Updated last month
- Precision Search through Multi-Style Inputs ☆73 · Updated 3 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆185 · Updated 6 months ago
- Official code for the NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆161 · Updated 3 weeks ago
- The SAIL-VL2 series model developed by the BytedanceDouyinContent Group ☆75 · Updated last month
- [EMNLP 2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆60 · Updated 2 months ago