Rex-Thinker: Grounded Object Referring via Chain-of-Thought Reasoning
☆146 · Updated Jun 30, 2025
Alternatives and similar repositories for Rex-Thinker
Users interested in Rex-Thinker are comparing it to the repositories listed below.
- [ICCV 2025] Referring any person or objects given a natural language description. Code base for RexSeek and HumanRef Benchmark (☆178, updated Oct 15, 2025)
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding (☆211, updated Oct 15, 2025)
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era (☆60, updated Dec 28, 2024)
- Cluster Document for IIL@HIT (☆20, updated Apr 5, 2023)
- WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning (☆36, updated Jun 10, 2025)
- (CVPR 26 Findings) Official implementation of the paper "Bind-Your-Avatar: Multi-Talking-Character Video Generation with Dynamic 3D-mask-… (☆34, updated Sep 25, 2025)
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception (☆159, updated Dec 6, 2024)
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models (☆73, updated Mar 18, 2025)
- [ICCV 2025] VisRL: Intention-Driven Visual Perception via Reinforced Reasoning (☆46, updated Nov 8, 2025)
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" (☆531, updated Apr 8, 2024)
- Implementation of the paper "RSRefSeg 2: Decoupling Referring Remote Sensing Image Segmentation with Foundation Models" (☆28, updated Jul 23, 2025)
- Official repository of 'Visual-RFT: Visual Reinforcement Fine-Tuning' & 'Visual-ARFT: Visual Agentic Reinforcement Fine-Tuning' (☆2,317, updated Oct 29, 2025)
- Official implementation of UnifiedReward & [NeurIPS 2025] UnifiedReward-Think & UnifiedReward-Flex (☆744, updated this week)
- [MM'2024] Official release of RFUND introduced in the MM'2024 paper "PEneo: Unifying Line Extraction, Line Grouping, and Entity Linking f… (☆20, updated Dec 4, 2024)
- The official implementation of OmniTrack: Omnidirectional Multi-Object Tracking (CVPR 2025) (☆109, updated Aug 27, 2025)
- OpenMMLab Detection Toolbox and Benchmark for V3Det (☆15, updated Apr 3, 2024)
- A project on visual spatial reasoning tasks (SIBench) (☆25, updated Jan 12, 2026)
- (☆41, updated this week)
- [ECCV 2024] SegVG: Transferring Object Bounding Box to Segmentation for Visual Grounding (☆64, updated Oct 22, 2024)
- [CVPR 2026] Detect Anything via Next Point Prediction (☆1,219, updated Feb 22, 2026)
- Repository of paper: Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models (☆37, updated Sep 19, 2023)
- (☆18, updated Aug 7, 2025)
- (☆13, updated Oct 30, 2023)
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… (☆949, updated Aug 5, 2025)
- Code for Affordance-R1 (☆65, updated Dec 21, 2025)
- (☆91, updated Mar 9, 2026)
- Dreambooth (LoRA) with well-organized code structure. Naive adaptation from 🤗Diffusers. (☆17, updated May 18, 2023)
- Sambor: Boosting Segment Anything Model Towards Open-Vocabulary Learning (☆32, updated Dec 7, 2023)
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution (☆330, updated Jul 4, 2025)
- [ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models (☆89, updated May 20, 2025)
- [ECCV 2024] The official code of the paper "Open-Vocabulary SAM" (☆1,029, updated Aug 4, 2025)
- Official code for "Painting with Words: Elevating Detailed Image Captioning with Benchmark and Alignment Learning" (ICLR 2025) (☆12, updated Mar 6, 2025)
- (☆30, updated Dec 12, 2024)
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition&Understanding and General Relation Comprehension of … (☆506, updated Aug 9, 2024)
- JoVA: Unified Multimodal Learning for Joint Video-Audio Generation (☆30, updated Dec 22, 2025)
- This repository provides data for the VAW dataset as described in the CVPR 2021 paper "Learning to Predict Visual Attributes in th… (☆69, updated Jul 22, 2022)
- A fork to add multimodal model training to open-r1 (☆1,507, updated Feb 8, 2025)
- (☆35, updated Jan 9, 2026)
- [NeurIPS 2025] Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing (☆92, updated Jul 27, 2025)