PzySeere / MetaSpatial
MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, realistic, and adaptive scene generation for applications in the metaverse, AR/VR, and game development.
☆114 · Updated last week
Alternatives and similar repositories for MetaSpatial:
Users interested in MetaSpatial are comparing it to the repositories listed below.
- [CVPR 2025] Code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" ☆97 · Updated 2 weeks ago
- A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with 3D World ☆252 · Updated 5 months ago
- Code & data for Grounded 3D-LLM with Referent Tokens ☆117 · Updated 4 months ago
- [NeurIPS'24] Implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆190 · Updated 4 months ago
- Code for "Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers" (NeurIPS 2024)☆161Updated last month
- A paper list for spatial reasoning ☆58 · Updated last month
- Spatial-R1: The first MLLM trained using GRPO for spatial reasoning in videos ☆33 · Updated this week
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆131 · Updated last year
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆52 · Updated 2 weeks ago
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆71 · Updated 7 months ago
- Official implementation of the ECCV 2024 paper "SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding" ☆242 · Updated last month
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆128 · Updated 6 months ago
- Official implementation of the paper "Exploring the Potential of Encoder-free Architectures in 3D LMMs" ☆51 · Updated last month
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning ☆37 · Updated 5 months ago
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆103 · Updated 5 months ago
- Code for 3DMIT: 3D Multi-modal Instruction Tuning for Scene Understanding ☆30 · Updated 9 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆46 · Updated 2 months ago
- [ICML'25] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆64 · Updated last month
- Code for "Chat-3D: Data-efficiently Tuning Large Language Model for Universal Dialogue of 3D Scenes" ☆54 · Updated last year
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding ☆101 · Updated last week
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning ☆61 · Updated this week
- Official implementation of "Ross3D: Reconstructive Visual Instruction Tuning with 3D-Awareness".☆20Updated last month