xmu-xiaoma666 / Multimodal-Open-O1
Multimodal Open-O1 (MO1) is designed to improve the accuracy of multimodal inference through a novel prompt-based approach. The tool runs entirely locally, building reasoning chains akin to those of OpenAI-o1 while relying on local processing power rather than a hosted API.
☆29 · Updated last year
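Because MO1's chains come purely from prompting rather than fine-tuning, the core pattern is easy to sketch: instruct a locally served model to emit its reasoning as explicit JSON steps, then parse and display them. The snippet below is a minimal illustration of that pattern, not MO1's actual code; the endpoint URL, model name, and step schema are all assumptions for a generic OpenAI-compatible local server.

```python
import json
from openai import OpenAI  # pip install openai

# Hypothetical local endpoint and model name: any OpenAI-compatible
# server hosting a locally run (multimodal) model would do.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

SYSTEM_PROMPT = (
    "You are a careful step-by-step reasoner. Respond ONLY with JSON: "
    'a list of objects like {"title": "...", "content": "..."}, '
    'ending with a step titled "Final Answer".'
)

def reasoning_chain(question: str, model: str = "local-mllm") -> list[dict]:
    """Prompt a locally served model for an explicit o1-style reasoning chain."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,
    )
    # A real implementation would guard against malformed JSON here.
    return json.loads(resp.choices[0].message.content)

if __name__ == "__main__":
    for step in reasoning_chain("How many r's are in 'strawberry'?"):
        print(f"{step['title']}: {step['content']}")
```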
Alternatives and similar repositories for Multimodal-Open-O1
Users interested in Multimodal-Open-O1 are comparing it to the libraries listed below.
- ☆90 · Updated last year
- SFT+RL boosts multimodal reasoning ☆42 · Updated 6 months ago
- [ICCV 2025] Dynamic-VLM ☆28 · Updated last year
- A Simple Framework of Small-scale LMMs for Video Understanding ☆108 · Updated 7 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆128 · Updated last year
- [ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆87 · Updated 7 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆121 · Updated last year
- [ICCV 2025] Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆80 · Updated 10 months ago
- ☆124 · Updated last year
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆68 · Updated last year
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆52 · Updated 5 months ago
- WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning ☆36 · Updated 7 months ago
- ☆62 · Updated 4 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- ☆41 · Updated 5 months ago
- Repo for the paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆48 · Updated 4 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆193 · Updated 8 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆64 · Updated 5 months ago
- ☆56 · Updated 8 months ago
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team ☆74 · Updated last year
- [AAAI 2025] An LMM that mitigates catastrophic forgetting ☆45 · Updated 9 months ago
- Official code for the NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆172 · Updated this week
- ☆74 · Updated 7 months ago
- [CVPR 2025] DynRefer: Delving into Region-level Multimodal Tasks via Dynamic Resolution ☆57 · Updated 10 months ago
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆96 · Updated last year
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆169 · Updated last year
- [ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆98 · Updated last month
- Precision Search through Multi-Style Inputs ☆73 · Updated 5 months ago
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- ✨✨ Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆164 · Updated last year