thunlp / Migician
[ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models
☆81 · Updated 6 months ago
Alternatives and similar repositories for Migician
Users interested in Migician are comparing it to the libraries listed below.
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆128 · Updated last year
- ☆90 · Updated last year
- A Simple Framework of Small-scale LMMs for Video Understanding ☆103 · Updated 5 months ago
- Official code for NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆163 · Updated last month
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆169 · Updated last year
- ☆123 · Updated last year
- ☆186 · Updated 9 months ago
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated last year
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆121 · Updated last year
- The SAIL-VL2 model series developed by the BytedanceDouyinContent Group ☆76 · Updated 2 months ago
- [ICCV 2025] Official repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆78 · Updated 9 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆91 · Updated this week
- Official GPU implementation of the paper "PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance" ☆130 · Updated last year
- The first attempt to replicate o3-like visual clue-tracking reasoning capabilities ☆61 · Updated 4 months ago
- Official implementation of the ICCV 2025 paper "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ☆251 · Updated last month
- [ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM ☆86 · Updated last year
- ☆62 · Updated 2 months ago
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆229 · Updated 3 weeks ago
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab ☆265 · Updated 2 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆62 · Updated 4 months ago
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆153 · Updated 3 months ago
- Official repository for the paper "MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning" (https://arxiv.org/abs/2406.17770) ☆158 · Updated last year
- [CVPR 2024] PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding ☆243 · Updated 9 months ago
- 💡 VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning ☆284 · Updated last month
- [ICCV 2025] Official repository of the paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆40 · Updated 5 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆168 · Updated last year
- Repo for the paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆48 · Updated 3 months ago
- ✨✨ Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆163 · Updated 11 months ago
- [NeurIPS 2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning ☆231 · Updated last month
- The Next Step Forward in Multimodal LLM Alignment ☆189 · Updated 7 months ago