yyyybq / Awesome-Spatial-Reasoning
A paper list for spatial reasoning
☆94 · Updated 2 weeks ago
Alternatives and similar repositories for Awesome-Spatial-Reasoning
Users interested in Awesome-Spatial-Reasoning are comparing it to the repositories listed below.
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆133 · Updated last month
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆63 · Updated 2 weeks ago
- [NeurIPS'24] SpatialEval: a benchmark to evaluate spatial reasoning abilities of MLLMs and LLMs ☆41 · Updated 5 months ago
- TimeChat-online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆50 · Updated this week
- ☆86 · Updated 3 months ago
- ☆37 · Updated 2 weeks ago
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆61 · Updated 2 weeks ago
- Official implementation of "Ross3D: Reconstructive Visual Instruction Tuning with 3D-Awareness" ☆29 · Updated 2 weeks ago
- ☆47 · Updated 3 weeks ago
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆211 · Updated 6 months ago
- Collections of Papers and Projects for Multimodal Reasoning. ☆105 · Updated 2 months ago
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph. ☆22 · Updated 4 months ago
- [NeurIPS 2024] Official Repository of Multi-Object Hallucination in Vision-Language Models ☆29 · Updated 7 months ago
- A Python script for downloading huggingface datasets and models. ☆19 · Updated 2 months ago
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" (ICLR 2025) ☆55 · Updated 3 months ago
- Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing ☆19 · Updated this week
- [CVPR 2025] Code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" ☆117 · Updated 3 weeks ago
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆65 · Updated 2 months ago
- Accepted by CVPR 2024 ☆33 · Updated last year
- The official implementation of the paper "Exploring the Potential of Encoder-free Architectures in 3D LMMs" ☆53 · Updated last month
- A collection of vision foundation models unifying understanding and generation. ☆55 · Updated 5 months ago
- ☆53 · Updated 2 months ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆64 · Updated 3 weeks ago
- ☆122 · Updated 4 months ago
- [LLaVA-Video-R1] ✨First Adaptation of R1 to LLaVA-Video (2025-03-18) ☆29 · Updated last month
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆85 · Updated 9 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆64 · Updated 3 months ago
- 📖 This is a repository for organizing papers, code, and other resources related to unified multimodal models. ☆239 · Updated this week
- ☆24 · Updated 4 months ago
- Data and Code for the CVPR 2025 paper "MMVU: Measuring Expert-Level Multi-Discipline Video Understanding" ☆68 · Updated 3 months ago