thunlp / Migician
[ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models
☆64 · Updated last month
Alternatives and similar repositories for Migician
Users interested in Migician are comparing it to the repositories listed below.
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 · Updated 7 months ago
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆202 · Updated 2 months ago
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆149 · Updated last month
- Pixel-Level Reasoning Model trained with RL ☆140 · Updated last week
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆164 · Updated 10 months ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆91 · Updated last week
- The first attempt to replicate o3-like visual clue-tracking reasoning capabilities. ☆54 · Updated 3 weeks ago
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated 8 months ago
- Official repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆69 · Updated 3 months ago
- Official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆75 · Updated last month
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆51 · Updated 5 months ago
- Official implementation of "Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams" ☆185 · Updated 5 months ago
- A Simple Framework of Small-scale LMMs for Video Understanding ☆65 · Updated last week
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆76 · Updated 2 weeks ago
- ☆85 · Updated last year
- ☆76 · Updated 3 months ago
- Official GPU implementation of the paper "PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance" ☆131 · Updated 7 months ago
- ☆48 · Updated 2 months ago
- Explore the Limits of Omni-modal Pretraining at Scale ☆103 · Updated 9 months ago
- ☆32 · Updated 5 months ago
- Official code of "Virgo: A Preliminary Exploration on Reproducing o1-like MLLM" ☆104 · Updated 3 weeks ago
- Official repository for "Vidi: Large Multimodal Models for Video Understanding and Editing" ☆109 · Updated 2 weeks ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆74 · Updated last month
- The Next Step Forward in Multimodal LLM Alignment ☆164 · Updated last month
- ☆115 · Updated 10 months ago
- [CVPR 2025] Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction ☆117 · Updated 3 months ago
- ☆107 · Updated 2 months ago
- Official repository for StableLLAVA ☆95 · Updated last year
- ☆173 · Updated 4 months ago
- [TMLR] Public code repository for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆143 · Updated 7 months ago