xmu-xiaoma666 / Multimodal-Open-O1
Multimodal Open-O1 (MO1) aims to improve inference accuracy through a novel prompt-based approach. The tool runs entirely locally and constructs reasoning chains akin to those produced by OpenAI-o1, using local processing power.
☆29 · Updated 5 months ago
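As a rough illustration of the prompt-based approach described above, the sketch below shows one common way to elicit o1-style reasoning chains from a local chat model: ask for one structured step per turn and loop until the model signals a final answer. This is a hypothetical minimal sketch, not MO1's actual implementation; the `ask_model` callback, the JSON step schema, and the prompt wording are all assumptions.

```python
import json

# Hypothetical system prompt asking the model to emit one reasoning step
# per turn as a JSON object (assumed schema, not MO1's actual prompt).
SYSTEM_PROMPT = (
    "You are an expert reasoner. Each turn, emit exactly one JSON object "
    'with keys "title", "content", and "next_action" '
    '("continue" or "final_answer").'
)

def parse_step(raw: str) -> dict:
    """Extract the JSON step from a reply, tolerating stray surrounding text."""
    start, end = raw.find("{"), raw.rfind("}")
    return json.loads(raw[start : end + 1])

def run_chain(ask_model, question: str, max_steps: int = 10) -> list[dict]:
    """Collect reasoning steps until the model signals a final answer.

    `ask_model(messages) -> str` is a placeholder for any local chat model
    (e.g. a llama.cpp or Transformers wrapper).
    """
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]
    chain = []
    for _ in range(max_steps):
        step = parse_step(ask_model(messages))
        chain.append(step)
        # Feed the step back so the model sees its own chain so far.
        messages.append({"role": "assistant", "content": json.dumps(step)})
        if step.get("next_action") == "final_answer":
            break
    return chain
```

Because the loop only depends on a `messages -> str` callback, the same skeleton works with any local backend, which matches the project's goal of localized processing.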
Alternatives and similar repositories for Multimodal-Open-O1:
Users interested in Multimodal-Open-O1 are comparing it to the libraries listed below:
- This is the official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 3 months ago
- Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆65 · Updated 3 weeks ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆97 · Updated 2 weeks ago
- [CVPR'25] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆61 · Updated 3 weeks ago
- ☆66 · Updated 2 months ago
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆60 · Updated 4 months ago
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆35 · Updated 9 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆40 · Updated 8 months ago
- Explore the Limits of Omni-modal Pretraining at Scale ☆96 · Updated 6 months ago
- Official repository of MMDU dataset ☆86 · Updated 5 months ago
- [ICLR2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆67 · Updated last month
- ☆80 · Updated 10 months ago
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM ☆44 · Updated 9 months ago
- minisora-DiT, a DiT reproduction based on XTuner from the open source community MiniSora ☆40 · Updated 11 months ago
- Video dataset dedicated to portrait-mode video recognition ☆44 · Updated 3 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆47 · Updated 7 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆113 · Updated 3 months ago
- Official implementation of the paper ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding ☆27 · Updated this week
- LMM solved catastrophic forgetting, AAAI 2025 ☆39 · Updated 4 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆96 · Updated 3 weeks ago
- Official implementation of Kangaroo: A Powerful Video-Language Model Supporting Long-context Video Input ☆63 · Updated 6 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆75 · Updated 2 weeks ago
- [CVPR 2025] DynRefer: Delving into Region-level Multimodal Tasks via Dynamic Resolution ☆44 · Updated 2 weeks ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 · Updated 5 months ago