Code & Data for Grounded 3D-LLM with Referent Tokens
☆132 · Updated Jan 5, 2025
Alternatives and similar repositories for Grounded_3D-LLM
Users interested in Grounded_3D-LLM are comparing it to the repositories listed below.
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities (☆81, updated Oct 10, 2024)
- [CVPR 2024] "LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning"; an interactive Large Langu… (☆311, updated Jul 17, 2024)
- [CVPR 2024 & NeurIPS 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI (☆652, updated Jun 13, 2025)
- Code for "Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers" (NeurIPS 2024) (☆206, updated Oct 20, 2025)
- [NeurIPS 2024 Oral] RG-SAN: Rule-Guided Spatial Awareness Network for End-to-End 3D Referring Expression Segmentation (☆19, updated Dec 22, 2024)
- Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries" (☆84, updated Aug 2, 2024)
- [ICCV 2025] A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with 3D World (☆373, updated Oct 21, 2025)
- [3DV 2025] Reason3D: Searching and Reasoning 3D Segmentation via Large Language Model (☆116, updated May 30, 2025)
- [ACM MM 2024] RefMask3D: Language-Guided Transformer for 3D Referring Segmentation (☆66, updated Jul 29, 2024)
- [ICML 2024] LEO: An Embodied Generalist Agent in 3D World (☆476, updated Apr 20, 2025)
- Official implementation of the ECCV 2024 paper "SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding" (☆278, updated Mar 19, 2025)
- [ECCV 2024 Best Paper Candidate & TPAMI 2025] PointLLM: Empowering Large Language Models to Understand Point Clouds (☆975, updated Aug 14, 2025)
- A PyTorch implementation of 3DRefTR, proposed in the paper "A Unified Framework for 3D Point Cloud Visual Grounding" (☆26, updated Aug 24, 2023)
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding (☆129, updated May 22, 2025)
- [NeurIPS 2024] A Unified Framework for 3D Scene Understanding (☆172, updated Jul 7, 2025)
- ☆56, updated Oct 3, 2024
- [ACM MM 2024 Oral] 3D-GRES: Generalized 3D Referring Expression Segmentation (☆42, updated Dec 15, 2024)
- [ECCV 2024] ShapeLLM: Universal 3D Object Understanding for Embodied Interaction (☆225, updated Oct 8, 2024)
- Awesome-LLM-3D: a curated list of resources on multi-modal large language models in the 3D world (☆2,117, updated Feb 3, 2026)
- [CVPR 2024] Visual Programming for Zero-shot Open-Vocabulary 3D Visual Grounding (☆62, updated Aug 3, 2024)
- Official implementation of PARIS3D (accepted to ECCV 2024) (☆27, updated Sep 25, 2024)
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning (☆43, updated Dec 9, 2024)
- Code for "3D-LLM: Injecting the 3D World into Large Language Models" (☆1,177, updated Jun 6, 2024)
- [ECCV 2024] OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation (☆204, updated Oct 19, 2024)
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning (☆156, updated Oct 13, 2023)
- [NeurIPS 2023] Weakly Supervised 3D Open-vocabulary Segmentation (☆125, updated Jan 11, 2024)
- [AAAI 2024] Official implementation of the paper "3D-STMN: Dependency-Driven Superpoint-Text Matching Network for End-to-End 3D Refer…" (☆44, updated Dec 20, 2023)
- Unifying 2D and 3D Vision-Language Understanding (☆121, updated Jul 23, 2025)
- ☆151, updated Aug 23, 2023
- [ECCV 2024] TOD3Cap: Towards 3D Dense Captioning in Outdoor Scenes (☆129, updated Mar 1, 2025)
- [ECCV 2024] M3DBench: a comprehensive 3D instruction-following dataset with support for interleaved multi-modal prompts (☆61, updated Oct 1, 2024)
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces (☆88, updated Jun 6, 2025)
- [CVPR 2025] 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs (☆53, updated Jun 13, 2024)
- 😎 An up-to-date, curated list of awesome 3D visual grounding papers, methods, and resources (☆261, updated Jan 14, 2026)
- [ICCV 2023] Multi3DRefer: Grounding Text Description to Multiple 3D Objects (☆94, updated Oct 18, 2025)
- [NeurIPS 2023] OV-PARTS: Towards Open-Vocabulary Part Segmentation (☆92, updated Jun 24, 2024)
- ☆13, updated Apr 24, 2023
- The Most Faithful Implementation of Segment Anything (SAM) in 3D (☆353, updated Sep 11, 2024)
- Code for "3DMIT: 3D Multi-Modal Instruction Tuning for Scene Understanding" (☆31, updated Jul 26, 2024)