thunlp / Migician
[ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models
☆78 · Updated 5 months ago
Alternatives and similar repositories for Migician
Users interested in Migician are comparing it to the repositories listed below.
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 · Updated 11 months ago
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆167 · Updated last year
- Official code for NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆152 · Updated last week
- A Simple Framework of Small-scale LMMs for Video Understanding ☆94 · Updated 4 months ago
- ☆186 · Updated 8 months ago
- ☆90 · Updated last year
- ☆119 · Updated last year
- Official implementation of ICCV 2025 "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ☆238 · Updated last week
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated last year
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆162 · Updated 9 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆118 · Updated last year
- The SAIL-VL2 series model developed by the BytedanceDouyinContent Group ☆70 · Updated last month
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, image, and video data. ☆252 · Updated 2 months ago
- [ICCV 2025] Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆77 · Updated 7 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆88 · Updated 4 months ago
- [CVPR 2025] Online Video Understanding: OVBench and VideoChat-Online ☆70 · Updated 2 weeks ago
- ☆74 · Updated last year
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆60 · Updated 3 months ago
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team ☆74 · Updated last year
- (ICCV 2025) Official repository of the paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆40 · Updated 3 months ago
- Official GPU implementation of the paper "PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance" ☆130 · Updated 11 months ago
- Pixel-Level Reasoning Model trained with RL [NeurIPS 2025] ☆238 · Updated last month
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770) ☆158 · Updated last year
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆148 · Updated 11 months ago
- ☆60 · Updated last month
- ☆130 · Updated 2 months ago
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab ☆260 · Updated 3 weeks ago
- [CVPR 2024] PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding ☆238 · Updated 8 months ago
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆243 · Updated last year
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆157 · Updated last year