liudaizong / Awesome-3D-Visual-Grounding
An up-to-date & curated list of awesome 3D Visual Grounding papers, methods & resources.
★ 222 · Updated last week
Alternatives and similar repositories for Awesome-3D-Visual-Grounding
Users interested in Awesome-3D-Visual-Grounding are comparing it to the libraries listed below.
- Code for "Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers" (NeurIPS 2024)β194Updated 6 months ago
- [CVPR 2025] The code for paper ''Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding''.β165Updated 4 months ago
- [CVPR'25] SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Groundingβ175Updated 5 months ago
- [CVPR 2024] "LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning"; an interactive Large Languβ¦β306Updated last year
- Official implementation of ECCV24 paper "SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding"β263Updated 6 months ago
- Code&Data for Grounded 3D-LLM with Referent Tokensβ126Updated 9 months ago
- β56Updated 6 months ago
- β138Updated 2 years ago
- [CVPR 2025] 3D-LLaVA: Towards Generalist 3D LMMs with Omni Superpoint Transformerβ66Updated 4 months ago
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models"β264Updated 9 months ago
- [CVPR 2024] Open3DSG: Open-Vocabulary 3D Scene Graphs from Point Clouds with Queryable Objects and Open-Set Relationshipsβ129Updated last year
- (CVPR 2023) PLA: Language-Driven Open-Vocabulary 3D Scene Understanding & (CVPR2024) RegionPLC: Regional Point-Language Contrastive Learnβ¦β292Updated last year
- [CVPR 2023] Vote2Cap-DETR and [T-PAMI 2024] Vote2Cap-DETR++; A set-to-set perspective towards 3D Dense Captioning; State-of-the-Art 3D Deβ¦β100Updated last year
- [ICLR 2023] SQA3D for embodied scene understanding and reasoningβ147Updated last year
- [CVPR 2023] EDA: Explicit Text-Decoupling and Dense Alignment for 3D Visual Groundingβ127Updated last year
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilitiesβ80Updated 11 months ago
- [NeurIPS 2024] Official code repository for MSR3D paperβ64Updated 2 months ago
- [ICCV 2023] Multi3DRefer: Grounding Text Description to Multiple 3D Objectsβ89Updated last year
- β63Updated 2 years ago
- The code for paper 'Learning from Videos for 3D World: Enhancing MLLMs with 3D Vision Geometry Priors'β133Updated last week
- [NeurIPS 2024] A Unified Framework for 3D Scene Understandingβ154Updated 2 months ago
- Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries"β81Updated last year
- [ECCV 2024] ShapeLLM: Universal 3D Object Understanding for Embodied Interactionβ206Updated 11 months ago
- CVPR2023 : VL-SAT: Visual-Linguistic Semantics Assisted Training for 3D Semantic Scene Graph Prediction in Point Cloudβ87Updated last year
- [NeurIPS 2025] MLLMs Need 3D-Aware Representation Supervision for Scene Understandingβ103Updated 2 weeks ago
- [CVPR 2024] Visual Programming for Zero-shot Open-Vocabulary 3D Visual Groundingβ55Updated last year
- [ICLR 2025] Intent3D: 3D Object Detection in RGB-D Scans Based on Human Intentionβ26Updated 7 months ago
- Code of 3DMIT: 3D MULTI-MODAL INSTRUCTION TUNING FOR SCENE UNDERSTANDINGβ31Updated last year
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Groundingβ114Updated 4 months ago
- [CVPR 2025 Highlightπ₯] Official code repository for "Inst3D-LMM: Instance-Aware 3D Scene Understanding with Multi-modal Instruction Tuniβ¦β113Updated 2 months ago