PzySeere / MetaSpatial
MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, realistic, and adaptive scene generation for applications in the metaverse, AR/VR, and game development.
☆62 · Updated last week
Alternatives and similar repositories for MetaSpatial:
Users interested in MetaSpatial are comparing it to the repositories listed below.
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆45 · Updated 3 weeks ago
- ☆48 · Updated 5 months ago
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning ☆38 · Updated 3 months ago
- A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with 3D World ☆229 · Updated 3 months ago
- Official implementation of the paper "Exploring the Potential of Encoder-free Architectures in 3D LMMs" ☆50 · Updated last month
- [ECCV 2024] M3DBench introduces a comprehensive 3D instruction-following dataset with support for interleaved multi-modal prompts. ☆60 · Updated 5 months ago
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆66 · Updated 5 months ago
- Code & data for Grounded 3D-LLM with Referent Tokens ☆108 · Updated 2 months ago
- [CVPR 2025] Code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" ☆64 · Updated 3 weeks ago
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆96 · Updated 4 months ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of "PhysBench: Benchmarking and Enhancing Vision-Language Models …" ☆44 · Updated 2 weeks ago
- ☆121 · Updated 2 months ago
- 4D Panoptic Scene Graph Generation (NeurIPS'23 Spotlight) ☆105 · Updated 2 weeks ago
- Code for "Chat-3D: Data-efficiently Tuning Large Language Model for Universal Dialogue of 3D Scenes" ☆52 · Updated last year
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆127 · Updated 5 months ago
- Code release for "PISA Experiments: Exploring Physics Post-Training for Video Diffusion Models by Watching Stuff Drop" (arXiv 2025) ☆24 · Updated last week
- [NeurIPS'24] Implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆147 · Updated 3 months ago
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆127 · Updated last year
- Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries" ☆73 · Updated 7 months ago
- Code for 3DMIT: 3D Multi-Modal Instruction Tuning for Scene Understanding ☆29 · Updated 8 months ago
- Code for "Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers" (NeurIPS 2024) ☆144 · Updated last month
- ☆46 · Updated 3 months ago
- [CVPR 2025] 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs ☆36 · Updated 9 months ago
- PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆36 · Updated this week
- ☆115 · Updated last year
- Code for the paper "Grounding Video Models to Actions through Goal Conditioned Exploration" ☆44 · Updated 3 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆38 · Updated last month
- A paper list for spatial reasoning ☆51 · Updated last month
- IKEA Manuals at Work: 4D Grounding of Assembly Instructions on Internet Videos ☆37 · Updated 3 months ago
- A collection of vision foundation models unifying understanding and generation ☆47 · Updated 2 months ago