harrytea / ROOT
ROOT: VLM based System for Indoor Scene Understanding and Beyond
☆39 · Updated last year
Alternatives and similar repositories for ROOT
Users interested in ROOT are comparing it to the repositories listed below
- Visual Spatial Tuning ☆171 · Updated last week
- ☆119 · Updated 3 weeks ago
- Scaling Spatial Intelligence with Multimodal Foundation Models ☆160 · Updated 3 weeks ago
- This repo contains the code for our paper "Towards Open-Ended Visual Recognition with Large Language Model" ☆99 · Updated last year
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World" ☆122 · Updated 4 months ago
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… ☆63 · Updated 10 months ago
- [ICCV 2025] GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding ☆73 · Updated 7 months ago
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆87 · Updated 8 months ago
- [CVPR 2024] Visual Programming for Zero-shot Open-Vocabulary 3D Visual Grounding ☆62 · Updated last year
- STI-Bench: Are MLLMs Ready for Precise Spatial-Temporal World Understanding? ☆35 · Updated 3 weeks ago
- [NeurIPS 2025] EOC-Bench, a benchmark designed to systematically evaluate object-centric embodied cognition in dynamic egocen… ☆22 · Updated 7 months ago
- [NeurIPS 2024] Understanding Multi-Granularity for Open-Vocabulary Part Segmentation ☆60 · Updated last year
- 4D Panoptic Scene Graph Generation (NeurIPS'23 Spotlight) ☆117 · Updated 10 months ago
- The official repo for the WACV 2025 paper "LLaVA-SpaceSGG: Visual Instruct Tuning for Open-vocabulary Scene Graph Generation with Enhanced Spa… ☆26 · Updated last year
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding ☆129 · Updated 8 months ago
- [NeurIPS 2023] OV-PARTS: Towards Open-Vocabulary Part Segmentation ☆92 · Updated last year
- [NeurIPS 2025] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆127 · Updated 6 months ago
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆95 · Updated 11 months ago
- [IEEE TCSVT] Official PyTorch implementation of CLIP-VIS: Adapting CLIP for Open-Vocabulary Video Instance Segmentation ☆47 · Updated last year
- [CVPR 2024] The official implementation of the paper "Sculpting Holistic 3D Representation in Contrastive Language-Image-3D Pre-training" ☆36 · Updated last year
- SpatialScore: Towards Unified Evaluation for Multimodal Spatial Understanding ☆60 · Updated 7 months ago
- Open-Vocabulary Panoptic Segmentation ☆27 · Updated 7 months ago
- ☆58 · Updated 2 years ago
- Scaling Properties of Diffusion Models for Perceptual Tasks (CVPR 2025) ☆44 · Updated 9 months ago
- Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models ☆167 · Updated 3 months ago
- ViGiL3D: A Linguistically Diverse Dataset for 3D Visual Grounding ☆17 · Updated 6 months ago
- ☆42 · Updated last year
- From Flatland to Space (SPAR). Accepted to NeurIPS 2025 Datasets & Benchmarks. A large-scale dataset & benchmark for 3D spatial perceptio… ☆77 · Updated last month
- [ECCV 2024] PartGLEE: A Foundation Model for Recognizing and Parsing Any Objects ☆57 · Updated last year
- Unifying 2D and 3D Vision-Language Understanding ☆121 · Updated 6 months ago