Rex-Thinker: Grounded Object Referring via Chain-of-Thought Reasoning
☆142 · Updated Jun 30, 2025
Alternatives and similar repositories for Rex-Thinker
Users interested in Rex-Thinker are comparing it to the repositories listed below.
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆211 · Updated Oct 15, 2025
- ☆17 · Updated Mar 5, 2025
- Perceive Anything: Recognize, Explain, Caption, and Segment Anything in Images and Videos ☆304 · Updated Sep 28, 2025
- Official implementation of Generative Colorization of Structured Mobile Web Pages, WACV 2023 ☆22 · Updated Dec 7, 2023
- WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning ☆36 · Updated Jun 10, 2025
- [ICCV 2025] GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding ☆73 · Updated Jun 26, 2025
- Official implementation of the paper "Bind-Your-Avatar: Multi-Talking-Character Video Generation with Dynamic 3D-mask-based Embedding Rou… ☆34 · Updated Sep 25, 2025
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆159 · Updated Dec 6, 2024
- Open Set Semantic Segmentation ☆10 · Updated Dec 23, 2020
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" ☆529 · Updated Apr 8, 2024
- JoVA: Unified Multimodal Learning for Joint Video-Audio Generation ☆30 · Updated Dec 22, 2025
- DenseShuffleNet for Semantic Segmentation using Caffe for Cityscapes and Mapillary Vistas Dataset ☆10 · Updated Mar 21, 2018
- ☆18 · Updated Aug 7, 2025
- Sambor: Boosting Segment Anything Model Towards Open-Vocabulary Learning ☆32 · Updated Dec 7, 2023
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆57 · Updated Dec 28, 2024
- [AAAI 2026] Think Before You Segment: An Object-aware Reasoning Agent for Referring Audio-Visual Segmentation ☆19 · Updated Nov 8, 2025
- Code release for "Language-conditioned Detection Transformer" ☆88 · Updated Jun 17, 2024
- [ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆89 · Updated May 20, 2025
- [AAAI 2026] Test-Time Reinforcement Learning for GUI Grounding via Region Consistency https://arxiv.org/abs/2508.05615 ☆61 · Updated Nov 8, 2025
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆945 · Updated Aug 5, 2025
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆331 · Updated Jul 4, 2025
- ☆19 · Updated Dec 20, 2025
- Teach-DETR: Better Training DETR with Teachers ☆31 · Updated Mar 18, 2024
- ☆32 · Updated Jul 23, 2022
- Official repository of 'Visual-RFT: Visual Reinforcement Fine-Tuning' & 'Visual-ARFT: Visual Agentic Reinforcement Fine-Tuning' ☆2,305 · Updated Oct 29, 2025
- [CVPR 2026] Detect Anything via Next Point Prediction ☆1,137 · Updated Feb 22, 2026
- [ECCV 2024] SegVG: Transferring Object Bounding Box to Segmentation for Visual Grounding ☆64 · Updated Oct 22, 2024
- Code for Affordance-R1 ☆59 · Updated Dec 21, 2025
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆505 · Updated Aug 9, 2024
- [CVPR 2024] Dynamic Prompt Optimizing for Text-to-Image Generation ☆86 · Updated Jul 13, 2024
- Official implementation of UnifiedReward & [NeurIPS 2025] UnifiedReward-Think & UnifiedReward-Flex ☆723 · Updated this week
- ☆17 · Updated Oct 18, 2022
- Vision-Language based Visual Object Tracking ☆27 · Updated Oct 10, 2025
- ☆13 · Updated Oct 30, 2023
- Multimodal grounded language dataset ☆11 · Updated Dec 14, 2021
- OpenMMLab Detection Toolbox and Benchmark for V3Det ☆15 · Updated Apr 3, 2024
- Official implementation of "Auto-Regressively Generating Multi-View Consistent Images" (ICCV 2025) ☆84 · Updated Jul 26, 2025
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆1,086 · Updated Jan 21, 2025
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆893 · Updated Aug 13, 2024