Haochen-Wang409 / Grasp-Any-Region
Official implementation of "Grasp Any Region: Towards Precise, Contextual Pixel Understanding for Multimodal LLMs".
☆95 · Updated last month
Alternatives and similar repositories for Grasp-Any-Region
Users interested in Grasp-Any-Region are comparing it to the repositories listed below.
- [ICML 2025] VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models ☆37 · Updated 6 months ago
- [MTI-LLM@NeurIPS 2025] Official implementation of "PyVision: Agentic Vision with Dynamic Tooling." ☆141 · Updated 5 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆64 · Updated 5 months ago
- ☆64 · Updated 5 months ago
- [ACL 2025 Oral & Award] Evaluate Image/Video Generation like Humans - Fast, Explainable, Flexible ☆113 · Updated 4 months ago
- The official repository of InfiniteVL ☆62 · Updated last week
- Official implementation of "Open-o3 Video: Grounded Video Reasoning with Explicit Spatio-Temporal Evidence" ☆127 · Updated last week
- [arXiv 2025] SAGE: Training Smart Any-Horizon Agents for Long Video Reasoning with Reinforcement Learning ☆46 · Updated last week
- [arXiv 2025] DiffusionVL: Translating Any Autoregressive Models into Diffusion Vision Language Models ☆110 · Updated this week
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model ☆42 · Updated last year
- Official code of "Monet: Reasoning in Latent Visual Space Beyond Image and Language" ☆85 · Updated last week
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆110 · Updated last month
- Official implementation of "CapRL: Stimulating Dense Image Caption Capabilities via Reinforcement Learning" ☆157 · Updated last month
- Implementation of "The Scalability of Simplicity: Empirical Analysis of Vision-Language Learning with a Single Transformer" ☆76 · Updated last month
- Official PyTorch implementation of TokenSet ☆127 · Updated 9 months ago
- Official repository of "R-4B: Incentivizing General-Purpose Auto-Thinking Capability in MLLMs via Bi-Mode Integration" ☆130 · Updated 3 months ago
- PyTorch implementation of NEPA ☆196 · Updated this week
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆123 · Updated 5 months ago
- ☆95 · Updated 6 months ago
- [NeurIPS 2025] Official implementation of "Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-level CLIP Latents" ☆43 · Updated last month
- Holistic Evaluation of Multimodal LLMs on Spatial Intelligence ☆50 · Updated this week
- UniVG-R1: Reasoning Guided Universal Visual Grounding with Reinforcement Learning ☆151 · Updated 6 months ago
- ☆56 · Updated 8 months ago
- [NeurIPS 2025] Elevating Visual Perception in Multimodal LLMs with Visual Embedding Distillation ☆68 · Updated 2 months ago
- The SAIL-VL2 series model developed by the BytedanceDouyinContent Group ☆76 · Updated 3 months ago
- Structured Video Comprehension of Real-World Shorts ☆227 · Updated 3 months ago
- ☆140 · Updated 2 months ago
- Quick Long Video Understanding ☆70 · Updated 2 months ago
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆185 · Updated this week
- ☆80 · Updated 2 weeks ago