xmu-xiaoma666 / Multimodal-Open-O1
Multimodal Open-O1 (MO1) is designed to improve inference accuracy through a novel prompt-based approach. The tool runs entirely locally and aims to produce reasoning chains akin to those of OpenAI-o1, but using local processing power.
☆29 · Updated 7 months ago
Alternatives and similar repositories for Multimodal-Open-O1:
Users interested in Multimodal-Open-O1 are comparing it to the repositories listed below:
- This is the official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 4 months ago
- ☆53 · Updated 3 weeks ago
- ☆83 · Updated 11 months ago
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆60 · Updated 6 months ago
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆63 · Updated this week
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆46 · Updated 4 months ago
- Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆67 · Updated 2 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆39 · Updated 2 months ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 7 months ago
- The official implementation of RAR ☆86 · Updated last year
- LMM solved catastrophic forgetting, AAAI2025 ☆41 · Updated 3 weeks ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆94 · Updated 2 months ago
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆34 · Updated 10 months ago
- [ICLR2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆72 · Updated 3 months ago
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" ICLR2025 ☆48 · Updated last month
- ☆115 · Updated 9 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆75 · Updated 3 weeks ago
- ☆75 · Updated 4 months ago
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆35 · Updated 10 months ago
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM ☆46 · Updated 11 months ago
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆24 · Updated last month
- ☆73 · Updated last year
- ☆19 · Updated last year
- Official implementation of MIA-DPO ☆56 · Updated 3 months ago
- Official repository of the MMDU dataset ☆89 · Updated 7 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆55 · Updated 9 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆100 · Updated 2 months ago
- OpenMMLab Detection Toolbox and Benchmark for V3Det ☆15 · Updated last year
- ACL'24 (Oral) Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆64 · Updated 7 months ago
- This repository provides an improved LLamaGen Model, fine-tuned on 500,000 high-quality images, each accompanied by an over-300-token prompt… ☆30 · Updated 6 months ago