MINT-SJTU / STI-Bench
STI-Bench : Are MLLMs Ready for Precise Spatial-Temporal World Understanding?
☆33 · Updated 4 months ago
Alternatives and similar repositories for STI-Bench
Users interested in STI-Bench are comparing it to the repositories listed below.
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆98 · Updated 4 months ago
- ☆41 · Updated 5 months ago
- [ICCV'25] Ross3D: Reconstructive Visual Instruction Tuning with 3D-Awareness ☆62 · Updated 4 months ago
- From Flatland to Space (SPAR). Accepted to NeurIPS 2025 Datasets & Benchmarks. A large-scale dataset & benchmark for 3D spatial perceptio… ☆63 · Updated last month
- OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models ☆72 · Updated 2 months ago
- Visual Spatial Tuning ☆146 · Updated 2 weeks ago
- ☆104 · Updated 4 months ago
- Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing ☆79 · Updated 4 months ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆193 · Updated 6 months ago
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" (ICLR 2025) ☆86 · Updated 8 months ago
- ☆60 · Updated 3 weeks ago
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆88 · Updated last year
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆77 · Updated 4 months ago
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆127 · Updated 7 months ago
- ☆100 · Updated 3 weeks ago
- Cambrian-S: Towards Spatial Supersensing in Video ☆375 · Updated 2 weeks ago
- SpatialScore: Towards Unified Evaluation for Multimodal Spatial Understanding ☆59 · Updated 4 months ago
- [CVPR 2025] The code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" ☆179 · Updated 5 months ago
- [CVPR 2025] Official PyTorch implementation of GLUS: Global-Local Reasoning Unified into A Single Large Language Model for Video Segmenta… ☆62 · Updated 5 months ago
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… ☆63 · Updated 8 months ago
- [NeurIPS 2025] MLLMs Need 3D-Aware Representation Supervision for Scene Understanding ☆115 · Updated 3 weeks ago
- ☆128 · Updated 8 months ago
- [ICLR 2025] Official code implementation of Video-UTR: Unhackable Temporal Rewarding for Scalable Video MLLMs ☆61 · Updated 9 months ago
- Awesome paper list for multi-modal LLMs with grounding ability ☆19 · Updated last month
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World" ☆115 · Updated last month
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆80 · Updated last year
- [NeurIPS'25] Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding ☆62 · Updated last month
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆103 · Updated 3 weeks ago
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ☆130 · Updated 3 months ago
- Official implementation of Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence ☆387 · Updated 5 months ago