xmu-xiaoma666 / Multimodal-Open-O1
Multimodal Open-O1 (MO1) is designed to improve the accuracy of inference models through a novel prompt-based approach. The tool runs locally and aims to produce inference chains akin to those used by OpenAI o1, but with local processing power.
☆29 · Updated 6 months ago
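The description above mentions building o1-style inference chains via prompting. A minimal sketch of that idea, assembling a staged reasoning prompt locally, might look like the following. The stage names, function name, and template are illustrative assumptions, not MO1's actual prompts or API.

```python
# Hypothetical sketch of a prompt-based "inference chain": the model is
# asked to reason through explicit staged steps before answering.
# The stages and template below are assumptions for illustration only.

STAGES = [
    "restate the problem",
    "list relevant observations from the input (text and/or image)",
    "reason step by step",
    "verify the reasoning",
    "final answer",
]

def build_chain_prompt(question: str, stages=STAGES) -> str:
    """Assemble a single prompt that asks the model to work through
    each reasoning stage in order before giving a final answer."""
    lines = [f"Question: {question}", "",
             "Work through the following stages in order:"]
    for i, stage in enumerate(stages, start=1):
        lines.append(f"Step {i}: {stage}.")
    lines.append("Only give the final answer after completing every step.")
    return "\n".join(lines)

prompt = build_chain_prompt("How many red blocks are in the image?")
print(prompt)
```

The resulting string would be sent to a locally hosted multimodal model; the staged structure is what encourages the model to produce an explicit reasoning chain rather than a direct answer.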
Alternatives and similar repositories for Multimodal-Open-O1:
Users interested in Multimodal-Open-O1 are comparing it to the libraries listed below.
- This is the official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 4 months ago
- Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆67 · Updated last month
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆60 · Updated 5 months ago
- [ICLR2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆69 · Updated 2 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆48 · Updated 3 months ago
- VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning ☆91 · Updated this week
- Unifying Visual Understanding and Generation with Dual Visual Vocabularies 🌈 ☆37 · Updated 3 weeks ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 · Updated 6 months ago
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆35 · Updated 10 months ago
- ☆15 · Updated 3 weeks ago
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" ICLR2025 ☆42 · Updated last month
- ☆66 · Updated last week
- ☆82 · Updated 11 months ago
- This repository provides an improved LLamaGen Model, fine-tuned on 500,000 high-quality images, each accompanied by over 300 token prompt… ☆30 · Updated 5 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆70 · Updated this week
- ☆40 · Updated last week
- OpenMMLab Detection Toolbox and Benchmark for V3Det ☆15 · Updated last year
- ☆71 · Updated 3 months ago
- Official implementation of MIA-DPO ☆54 · Updated 2 months ago
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆34 · Updated 9 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆28 · Updated 2 weeks ago
- Official implementation of the paper ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding ☆30 · Updated last month
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins… ☆19 · Updated last year
- Explore the Limits of Omni-modal Pretraining at Scale ☆97 · Updated 7 months ago
- LMM solved catastrophic forgetting, AAAI2025 ☆40 · Updated this week
- The official repo for "TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding" ☆39 · Updated 6 months ago
- ☆115 · Updated 8 months ago
- ☆44 · Updated last month
- ☆83 · Updated last year
- Official implementation of the paper AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding ☆41 · Updated 3 weeks ago