SilongYong / SQA3D
[ICLR 2023] SQA3D for embodied scene understanding and reasoning
☆142 · Updated last year
Alternatives and similar repositories for SQA3D
Users interested in SQA3D are comparing it to the repositories listed below.
- ☆133 · Updated 2 years ago
- [NeurIPS 2024] Official code repository for MSR3D paper · ☆62 · Updated last month
- Code & Data for Grounded 3D-LLM with Referent Tokens · ☆126 · Updated 7 months ago
- Official implementation of Language Conditioned Spatial Relation Reasoning for 3D Object Grounding (NeurIPS'22) · ☆62 · Updated 2 years ago
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities · ☆79 · Updated 10 months ago
- Code for "Chat-3D: Data-efficiently Tuning Large Language Model for Universal Dialogue of 3D Scenes" · ☆54 · Updated last year
- ☆41 · Updated 2 years ago
- Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries" · ☆78 · Updated last year
- Code for "Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers" (NeurIPS 2024) · ☆190 · Updated 4 months ago
- ☆49 · Updated 10 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World · ☆130 · Updated 10 months ago
- Code for 3DMIT: 3D Multi-Modal Instruction Tuning for Scene Understanding · ☆30 · Updated last year
- Official implementation of ECCV24 paper "SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding" · ☆256 · Updated 5 months ago
- A collection of 3D vision and language (e.g., 3D Visual Grounding, 3D Question Answering and 3D Dense Caption) papers and datasets · ☆99 · Updated 2 years ago
- Code for the ECCV22 paper "Bottom Up Top Down Detection Transformers for Language Grounding in Images and Point Clouds" · ☆91 · Updated 2 years ago
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" · ☆239 · Updated 8 months ago
- [CVPR 2025] The code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" · ☆145 · Updated 2 months ago
- [ICCV 2023] Multi3DRefer: Grounding Text Description to Multiple 3D Objects · ☆88 · Updated last year
- [CVPR 2024] Visual Programming for Zero-shot Open-Vocabulary 3D Visual Grounding · ☆55 · Updated last year
- [CVPR 2024] "LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning"; an interactive Large Langu… · ☆304 · Updated last year
- 😎 Up-to-date & curated list of awesome 3D Visual Grounding papers, methods & resources · ☆206 · Updated last month
- ☆63 · Updated 2 years ago
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… · ☆61 · Updated 5 months ago
- ☆27 · Updated last year
- [CVPR 2023] Code for "3D Concept Learning and Reasoning from Multi-View Images" · ☆80 · Updated last year
- [NeurIPS 2024] Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding · ☆95 · Updated 6 months ago
- Official implementation of the ICCV 2023 paper "3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment" · ☆213 · Updated last year
- Code accompanying our ECCV-2020 paper on 3D Neural Listeners · ☆132 · Updated 4 years ago
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding · ☆111 · Updated 3 months ago
- This is the official implementation of the paper "LAR: Look Around and Refer" · ☆30 · Updated 2 years ago