ch3cook-fdu / Vote2Cap-DETR
[CVPR 2023] Vote2Cap-DETR and [T-PAMI 2024] Vote2Cap-DETR++: a set-to-set perspective towards 3D dense captioning; state-of-the-art 3D dense captioning methods
★89 · Updated 5 months ago
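The "set-to-set" phrasing refers to DETR-style training, where a fixed set of learned queries is matched one-to-one against the ground-truth object set before losses are computed. The sketch below illustrates that matching step only; it is not taken from the Vote2Cap-DETR codebase, and the cost terms, weights, and function name (`match_predictions_to_gt`) are illustrative assumptions.

```python
# Minimal sketch of DETR-style bipartite (set-to-set) matching, assuming a
# simple cost made of a center-distance term and a classification term.
# This is NOT the Vote2Cap-DETR implementation; it only illustrates the idea.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_predictions_to_gt(pred_centers, gt_centers, pred_cls_prob, gt_labels,
                            w_center=1.0, w_cls=1.0):
    """Hungarian matching between predicted and ground-truth objects.

    pred_centers:  (P, 3) predicted box centers
    gt_centers:    (G, 3) ground-truth box centers
    pred_cls_prob: (P, C) predicted class probabilities
    gt_labels:     (G,)   ground-truth class indices
    """
    # Center cost: L1 distance between every prediction/ground-truth pair, shape (P, G)
    center_cost = np.abs(pred_centers[:, None, :] - gt_centers[None, :, :]).sum(-1)
    # Classification cost: negative probability assigned to the true class, shape (P, G)
    cls_cost = -pred_cls_prob[:, gt_labels]
    cost = w_center * center_cost + w_cls * cls_cost
    pred_idx, gt_idx = linear_sum_assignment(cost)  # optimal one-to-one assignment
    return pred_idx, gt_idx

# Toy usage: 4 predictions matched against 2 ground-truth objects
rng = np.random.default_rng(0)
pred_idx, gt_idx = match_predictions_to_gt(
    rng.normal(size=(4, 3)), rng.normal(size=(2, 3)),
    rng.dirichlet(np.ones(5), size=4), np.array([1, 3]))
print(list(zip(pred_idx, gt_idx)))  # two (prediction, ground-truth) index pairs
```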
Alternatives and similar repositories for Vote2Cap-DETR:
Users who are interested in Vote2Cap-DETR are comparing it to the repositories listed below
- An up-to-date and curated list of awesome 3D Visual Grounding papers, methods & resources. ★115 · Updated 2 weeks ago
- [CVPR 2022 Oral] 3DJCG: A Unified Framework for Joint Dense Captioning and Visual Grounding on 3D Point Clouds ★53 · Updated 2 years ago
- [MM 2024 Oral] 3D-GRES: Generalized 3D Referring Expression Segmentation ★31 · Updated last month
- [ICCV 2023] Multi3DRefer: Grounding Text Description to Multiple 3D Objects ★79 · Updated last year
- This is a PyTorch implementation of 3DRefTR proposed by our paper "A Unified Framework for 3D Point Cloud Visual Grounding" ★20 · Updated last year
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ★64 · Updated 3 months ago
- [CVPR 2023] EDA: Explicit Text-Decoupling and Dense Alignment for 3D Visual Grounding ★112 · Updated last year
- ★56 · Updated last year
- Official implementation of Language Conditioned Spatial Relation Reasoning for 3D Object Grounding (NeurIPS'22). ★57 · Updated 2 years ago
- [CVPR 2024] Visual Programming for Zero-shot Open-Vocabulary 3D Visual Grounding ★47 · Updated 5 months ago
- [AAAI 2024] The official implementation of the paper "3D-STMN: Dependency-Driven Superpoint-Text Matching Network for End-to-End 3D Refer…" ★37 · Updated last year
- This is a PyTorch implementation of MCLN proposed by our paper "Multi-branch Collaborative Learning Network for 3D Visual Grounding" (ECCV…) ★14 · Updated 3 months ago
- [ICCV 2021] 3DVG-Transformer: Relation Modeling for Visual Grounding on Point Clouds ★41 · Updated 2 years ago
- This is the code related to "Context-aware Alignment and Mutual Masking for 3D-Language Pre-training" (CVPR 2023). ★25 · Updated last year
- Code & Data for Grounded 3D-LLM with Referent Tokens ★98 · Updated 3 weeks ago
- [CVPR 2024] "LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning"; an interactive Large Langu… ★261 · Updated 6 months ago
- Awesome lists of papers and codes about open-vocabulary perception, including both 3D and 2D ★31 · Updated last month
- ★30 · Updated 6 months ago
- Code for the ECCV22 paper "Bottom Up Top Down Detection Transformers for Language Grounding in Images and Point Clouds" ★83 · Updated last year
- Code for "Distilling Coarse-to-fine Semantic Matching Knowledge for Weakly Supervised 3D Visual Grounding" (ICCV 2023) ★11 · Updated 3 months ago
- [ICCV 2023] PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning ★243 · Updated last year
- [NeurIPS 2024] A Unified Framework for 3D Scene Understanding ★127 · Updated 2 months ago
- [AAAI 24] Official Codebase for BridgeQA: Bridging the Gap between 2D and 3D Visual Question Answering: A Fusion Approach for 3D VQA ★18 · Updated 6 months ago
- Official code of DMA: Dense Multimodal Alignment for Open-Vocabulary 3D Scene Understanding (ECCV 2024) ★25 · Updated 6 months ago
- A collection of 3D vision and language (e.g., 3D Visual Grounding, 3D Question Answering, and 3D Dense Captioning) papers and datasets. ★96 · Updated last year
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ★122 · Updated last year
- [3DV 2025] Reason3D: Searching and Reasoning 3D Segmentation via Large Language Model ★54 · Updated last week
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ★100 · Updated last month
- Code for "Chat-3D: Data-efficiently Tuning Large Language Model for Universal Dialogue of 3D Scenes" ★50 · Updated 10 months ago
- ★107 · Updated last year