harrytea / ROOT
ROOT: VLM-based System for Indoor Scene Understanding and Beyond
☆37 · Updated 10 months ago
Alternatives and similar repositories for ROOT
Users interested in ROOT are comparing it to the repositories listed below.
- Visual Spatial Tuning · ☆146 · Updated 2 weeks ago
- Scaling Spatial Intelligence with Multimodal Foundation Models · ☆117 · Updated last week
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World" · ☆116 · Updated last month
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces · ☆86 · Updated 5 months ago
- [NeurIPS 2023] OV-PARTS: Towards Open-Vocabulary Part Segmentation · ☆92 · Updated last year
- ☆40 · Updated 4 months ago
- [CVPR'24 Highlight] The official code and data for paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… · ☆63 · Updated 8 months ago
- 4D Panoptic Scene Graph Generation (NeurIPS'23 Spotlight) · ☆116 · Updated 8 months ago
- This repo contains the code for our paper "Towards Open-Ended Visual Recognition with Large Language Model" · ☆98 · Updated last year
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation · ☆94 · Updated 8 months ago
- [NeurIPS 2025] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation · ☆122 · Updated 4 months ago
- An official repo for WACV 2025 paper "LLaVA-SpaceSGG: Visual Instruct Tuning for Open-vocabulary Scene Graph Generation with Enhanced Spa… · ☆25 · Updated 10 months ago
- [CVPR 2024] Visual Programming for Zero-shot Open-Vocabulary 3D Visual Grounding · ☆58 · Updated last year
- [ECCV 2024] PartGLEE: A Foundation Model for Recognizing and Parsing Any Objects · ☆54 · Updated last year
- OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models · ☆72 · Updated 2 months ago
- ACTIVE-O3: Empowering Multimodal Large Language Models with Active Perception via GRPO · ☆75 · Updated last week
- SpatialScore: Towards Unified Evaluation for Multimodal Spatial Understanding · ☆59 · Updated 4 months ago
- ☆42 · Updated last year
- [ICCV 2025] GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding · ☆69 · Updated 5 months ago
- [CVPR 2025] EntitySAM: Segment Everything in Video · ☆54 · Updated 4 months ago
- [CVPR 2025] Test-Time Visual In-Context Tuning · ☆25 · Updated 8 months ago
- STI-Bench: Are MLLMs Ready for Precise Spatial-Temporal World Understanding? · ☆33 · Updated 4 months ago
- [ECCV 2024] M3DBench introduces a comprehensive 3D instruction-following dataset with support for interleaved multi-modal prompts. · ☆61 · Updated last year
- ☆41 · Updated 5 months ago
- ☆109 · Updated 2 years ago
- Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models · ☆160 · Updated last month
- ☆58 · Updated 2 years ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning · ☆98 · Updated 4 months ago
- [NeurIPS 2024] Understanding Multi-Granularity for Open-Vocabulary Part Segmentation · ☆56 · Updated 11 months ago
- A list of works on video generation towards world models · ☆222 · Updated this week