SilongYong / SQA3D
[ICLR 2023] SQA3D for embodied scene understanding and reasoning
☆156 · Updated Oct 13, 2023
Alternatives and similar repositories for SQA3D
Users interested in SQA3D are comparing it to the repositories listed below.
- ☆150 · Updated Aug 23, 2023
- Official implementation of ICCV 2023 paper "3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment" · ☆217 · Updated Sep 7, 2023
- Official implementation of ECCV 2024 paper "SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding" · ☆278 · Updated Mar 19, 2025
- General-purpose Visual Understanding Evaluation · ☆20 · Updated Dec 21, 2023
- A collection of 3D vision-and-language papers and datasets (e.g., 3D visual grounding, 3D question answering, and 3D dense captioning) · ☆101 · Updated Feb 26, 2023
- [NeurIPS 2024] MSR3D: Advanced Situated Reasoning in 3D Scenes · ☆70 · Updated Dec 2, 2025
- [ICML 2024] LEO: An Embodied Generalist Agent in 3D World · ☆475 · Updated Apr 20, 2025
- ☆11 · Updated Feb 1, 2023
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities · ☆81 · Updated Oct 10, 2024
- Code for the ECCV 2022 paper "Bottom Up Top Down Detection Transformers for Language Grounding in Images and Point Clouds" · ☆95 · Updated Jun 9, 2023
- OpenEQA: Embodied Question Answering in the Era of Foundation Models · ☆340 · Updated Sep 20, 2024
- Code accompanying our ECCV 2020 paper on 3D Neural Listeners · ☆138 · Updated Jun 29, 2021
- [CVPR 2023] EDA: Explicit Text-Decoupling and Dense Alignment for 3D Visual Grounding · ☆132 · Updated Oct 11, 2023
- [IJCAI 2022] Spatiality-guided Transformer for 3D Dense Captioning on Point Clouds (official PyTorch implementation) · ☆21 · Updated Aug 31, 2022
- Code for "Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers" (NeurIPS 2024) · ☆206 · Updated Oct 20, 2025
- (CVPR 2023) PLA: Language-Driven Open-Vocabulary 3D Scene Understanding & (CVPR 2024) RegionPLC: Regional Point-Language Contrastive Learn… · ☆298 · Updated Jun 28, 2024
- Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries" · ☆84 · Updated Aug 2, 2024
- [CVPR 2022 Oral] 3DJCG: A Unified Framework for Joint Dense Captioning and Visual Grounding on 3D Point Clouds · ☆57 · Updated Jan 29, 2023
- Code for "SlotLifter: Slot-guided Feature Lifting for Learning Object-centric Radiance Fields" (ECCV 2024) · ☆12 · Updated Oct 30, 2024
- Code for the ECCV 2020 paper "LEMMA: A Multi-view Dataset for LEarning Multi-agent Multi-task Activities" · ☆30 · Updated Apr 8, 2021
- [ECCV 2022] D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding · ☆44 · Updated Aug 27, 2022
- Official implementation of "SUGAR: Pre-training 3D Visual Representations for Robotics" (CVPR 2024) · ☆45 · Updated Jun 19, 2025
- Code release for "3D Concept Grounding on Neural Fields" (NeurIPS 2022) · ☆15 · Updated Feb 13, 2023
- Code and data for "Grounded 3D-LLM with Referent Tokens" · ☆132 · Updated Jan 5, 2025
- Code for "Chat-3D: Data-efficiently Tuning Large Language Model for Universal Dialogue of 3D Scenes" · ☆56 · Updated Mar 28, 2024
- ☆56 · Updated Oct 3, 2024
- [ICCV 2023] Multi3DRefer: Grounding Text Description to Multiple 3D Objects · ☆94 · Updated Oct 18, 2025
- ☆24 · Updated Oct 8, 2023
- Use Phobos to animate robot motions in Blender · ☆15 · Updated Jul 10, 2024
- [CVPR 2021] Scan2Cap: Context-aware Dense Captioning in RGB-D Scans · ☆107 · Updated Sep 6, 2022
- ☆44 · Updated Mar 27, 2023
- [CVPR 2024] "LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning"; an interactive Large Langu… · ☆311 · Updated Jul 17, 2024
- [ECCV 2020] ScanRefer: 3D Object Localization in RGB-D Scans using Natural Language · ☆295 · Updated Feb 10, 2023
- ☆63 · Updated May 17, 2023
- [CVPR 2024 & NeurIPS 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI · ☆651 · Updated Jun 13, 2025
- Awesome-LLM-3D: a curated list of resources on multi-modal large language models in the 3D world · ☆2,115 · Updated Feb 3, 2026
- [ECCV 2024] M3DBench introduces a comprehensive 3D instruction-following dataset with support for interleaved multi-modal prompts · ☆61 · Updated Oct 1, 2024
- 3RScan Toolkit · ☆250 · Updated May 26, 2022
- [CVPR 2023] Code for "3D Concept Learning and Reasoning from Multi-View Images" · ☆84 · Updated Jan 20, 2024