ZCMax / LLaVA-3D
[ICCV 2025] A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with 3D World
☆344 · Updated 3 weeks ago
Alternatives and similar repositories for LLaVA-3D
Users interested in LLaVA-3D are comparing it to the repositories listed below.
- Official implementation of Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence ☆380 · Updated 4 months ago
- Official implementation of the ECCV 2024 paper "SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding" ☆269 · Updated 7 months ago
- [CVPR 2025] Code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" ☆175 · Updated 5 months ago
- Code for "Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers" (NeurIPS 2024)☆198Updated 3 weeks ago
- [CVPR 2024] "LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning"; an interactive Large Langu…☆308Updated last year
- VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction ☆293 · Updated 2 months ago
- Code & data for "Grounded 3D-LLM with Referent Tokens" ☆129 · Updated 10 months ago
- [NeurIPS'24] Implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆276 · Updated 11 months ago
- [ECCV 2024] ShapeLLM: Universal 3D Object Understanding for Embodied Interaction ☆210 · Updated last year
- Code for the paper "Learning from Videos for 3D World: Enhancing MLLMs with 3D Vision Geometry Priors" ☆154 · Updated last month
- 3D-R1: Enhancing Reasoning in 3D VLMs for Unified Scene Understanding ☆353 · Updated 2 weeks ago
- [CVPR'25] SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding ☆183 · Updated 6 months ago
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding ☆119 · Updated 5 months ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆193 · Updated 6 months ago
- OmniWorld: A Multi-Domain and Multi-Modal Dataset for 4D World Modeling ☆387 · Updated this week
- Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models ☆158 · Updated last month
- [ICCV 2025 & ICCV 2025 RIWM Outstanding Paper] Aether: Geometric-Aware Unified World Modeling☆527Updated 3 weeks ago
- SceneFun3D ToolKit☆159Updated 6 months ago
- ☆168Updated 8 months ago
- [ICML 2024] Official code repository for 3D embodied generalist agent LEO☆465Updated 6 months ago
- Code for the paper: "ODIN: A Single Model for 2D and 3D Segmentation" (CVPR 2024)☆170Updated 3 weeks ago
- 😎 up-to-date & curated list of awesome 3D Visual Grounding papers, methods & resources.☆241Updated this week
- [NeurIPS 2024] Official code repository for MSR3D paper☆68Updated 3 months ago
- Official code for the CVPR 2025 paper "Navigation World Models".☆431Updated 3 months ago
- Unifying 2D and 3D Vision-Language Understanding☆116Updated 3 months ago
- [ICCV'25] Ross3D: Reconstructive Visual Instruction Tuning with 3D-Awareness☆60Updated 3 months ago
- [NeurIPS 2024] Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding☆96Updated 9 months ago
- [NeurIPS 2025] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆122 · Updated 3 months ago
- [ICML 2025] Orient Anything ☆346 · Updated last month
- ☆98 · Updated 2 weeks ago