qizekun / OmniSpatial
OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models
☆78Updated 3 months ago
Alternatives and similar repositories for OmniSpatial
Users interested in OmniSpatial are comparing it to the repositories listed below
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities☆80Updated last year
- From Flatland to Space (SPAR). Accepted to NeurIPS 2025 Datasets & Benchmarks. A large-scale dataset & benchmark for 3D spatial perception…☆74Updated last week
- [ICCV'25] Ross3D: Reconstructive Visual Instruction Tuning with 3D-Awareness☆63Updated 5 months ago
- [NeurIPS 2025] Source code for the paper "MindJourney: Test-Time Scaling with World Models for Spatial Reasoning"☆122Updated 2 months ago
- Unifying 2D and 3D Vision-Language Understanding☆119Updated 5 months ago
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning☆42Updated last year
- ☆54Updated last year
- [NeurIPS 2025] OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding☆69Updated 3 months ago
- [NeurIPS 2025] 3DRS: MLLMs Need 3D-Aware Representation Supervision for Scene Understanding☆137Updated last month
- Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries"☆83Updated last year
- [NeurIPS 2024] Official code repository for the MSR3D paper☆69Updated last month
- [NeurIPS 2025] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation☆126Updated 5 months ago
- 4D Panoptic Scene Graph Generation (NeurIPS'23 Spotlight)☆116Updated 10 months ago
- SpatialScore: Towards Unified Evaluation for Multimodal Spatial Understanding☆59Updated 6 months ago
- ☆116Updated 2 months ago
- [CVPR 2025] The code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding"☆189Updated 7 months ago
- Code & Data for Grounded 3D-LLM with Referent Tokens☆131Updated last year
- Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models☆166Updated 3 months ago
- [CVPR 2024] Visual Programming for Zero-shot Open-Vocabulary 3D Visual Grounding☆61Updated last year
- STI-Bench: Are MLLMs Ready for Precise Spatial-Temporal World Understanding?☆35Updated 6 months ago
- Official code for the paper "N3D-VLM: Native 3D Grounding Enables Accurate Spatial Reasoning in Vision-Language Models"☆69Updated 3 weeks ago
- [NeurIPS 2024] Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding☆99Updated 11 months ago
- Visual Spatial Tuning☆162Updated last week
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning☆102Updated 6 months ago
- ☆42Updated 7 months ago
- [ECCV 2024] M3DBench introduces a comprehensive 3D instruction-following dataset with support for interleaved multi-modal prompts.☆61Updated last year
- [CVPR 2025] Official PyTorch implementation of GLUS: Global-Local Reasoning Unified into A Single Large Language Model for Video Segmentation☆65Updated 6 months ago
- The code for the paper "Learning from Videos for 3D World: Enhancing MLLMs with 3D Vision Geometry Priors"☆190Updated last month
- [3DV 2025] Reason3D: Searching and Reasoning 3D Segmentation via Large Language Model☆116Updated 7 months ago
- Code for 3DMIT: 3D Multi-Modal Instruction Tuning for Scene Understanding☆31Updated last year