HaolinLiu97 / Refer-it-in-RGBD
Repository of our CVPR 2021 paper 'Refer-it-in-RGBD' (☆40, updated last year)
Alternatives and similar repositories for Refer-it-in-RGBD
Users interested in Refer-it-in-RGBD are comparing it to the repositories listed below.
- [CVPR 2022] X-Trans2Cap: Cross-Modal Knowledge Transfer using Transformer for 3D Dense Captioning (☆36, updated 2 years ago)
- [CVPR 2021] Scan2Cap: Context-aware Dense Captioning in RGB-D Scans (☆106, updated 2 years ago)
- [IJCAI 2022] Spatiality-guided Transformer for 3D Dense Captioning on Point Clouds (official PyTorch implementation) (☆20, updated 2 years ago)
- (untitled repository) (☆24, updated 3 years ago)
- [ICCV 2021 Oral] SAT: 2D Semantics Assisted Training for 3D Visual Grounding (☆33, updated 3 years ago)
- [CVPR 2022] Multi-View Transformer for 3D Visual Grounding (☆77, updated 2 years ago)
- [ICCV 2021] InstanceRefer: Cooperative Holistic Understanding for Visual Grounding on Point Clouds through Instance Multi-level Contextua… (☆75, updated 4 months ago)
- [CVPR 2022 Oral] 3DJCG: A Unified Framework for Joint Dense Captioning and Visual Grounding on 3D Point Clouds (☆55, updated 2 years ago)
- Free-form Description-guided 3D Visual Graph Networks for Object Grounding in Point Cloud (☆17, updated 3 years ago)
- [ECCV 2022] D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding (☆43, updated 2 years ago)
- Code accompanying our ECCV 2020 paper on 3D Neural Listeners (☆132, updated 4 years ago)
- A collection of papers and datasets on 3D vision and language (e.g., 3D visual grounding, 3D question answering, and 3D dense captioning) (☆100, updated 2 years ago)
- [ICCV 2021] 3DVG-Transformer: Relation Modeling for Visual Grounding on Point Clouds (☆42, updated 3 years ago)
- Code for "Context-aware Alignment and Mutual Masking for 3D-Language Pre-training" (CVPR 2023) (☆29, updated 2 years ago)
- (untitled repository) (☆27, updated last year)
- (untitled repository) (☆11, updated 2 years ago)
- [AAAI 2023 Oral] Language-Assisted 3D Feature Learning for Semantic Scene Understanding (☆12, updated 2 years ago)
- [TNNLS] Toward Explainable and Fine-Grained 3D Grounding through Referring Textual Phrases