XingruiWang / 3D-Aware-VQA
Official Code for the NeurIPS'23 paper "3D-Aware Visual Question Answering about Parts, Poses and Occlusions"
☆19 · Updated last year
Alternatives and similar repositories for 3D-Aware-VQA
Users interested in 3D-Aware-VQA are comparing it to the repositories listed below.
- Omni6D: Large-Vocabulary 3D Object Dataset for Category-Level 6D Object Pose Estimation ☆48 · Updated last year
- IKEA Manuals at Work: 4D Grounding of Assembly Instructions on Internet Videos ☆55 · Updated 9 months ago
- [ICCV 2023] Understanding 3D Object Interaction from a Single Image ☆48 · Updated last year
- ☆44 · Updated last year
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆46 · Updated 2 years ago
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning ☆42 · Updated last year
- Code for Affordance-R1 ☆50 · Updated 3 weeks ago
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆80 · Updated last year
- Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries" ☆83 · Updated last year
- OpenScan: A Benchmark for Generalized Open-Vocabulary 3D Scene Understanding ☆19 · Updated last month
- Code implementation of the paper "FIction: 4D Future Interaction Prediction from Video" ☆17 · Updated 10 months ago
- Code release of "3D Concept Grounding on Neural Fields" (NeurIPS 2022) ☆15 · Updated 2 years ago
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆69 · Updated last month
- [CVPR 2024] Official implementation of the paper "Sculpting Holistic 3D Representation in Contrastive Language-Image-3D Pre-training" ☆36 · Updated last year
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆45 · Updated last year
- (Incomplete) An implementation of AffordanceLLM ☆17 · Updated last year
- 3DAffordSplat: Efficient Affordance Reasoning with 3D Gaussians (ACM MM 2025) ☆67 · Updated 5 months ago
- ImageNet3D: Towards General-Purpose Object-Level 3D Understanding ☆19 · Updated last year
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆41 · Updated 4 months ago
- CVPR 2025 ☆39 · Updated last month
- ☆18 · Updated last year
- ☆54 · Updated last year
- (ECCV 2022 Oral) TO-Scene: A Large-Scale Dataset for Understanding 3D Tabletop Scenes ☆57 · Updated 11 months ago
- ☆12 · Updated 8 months ago
- ☆46 · Updated last year
- Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆29 · Updated last year
- ☆37 · Updated last week
- Official implementation of PARIS3D (accepted to ECCV 2024) ☆27 · Updated last year
- A collection of 3D vision-and-language papers and datasets (e.g., 3D visual grounding, 3D question answering, and 3D dense captioning) ☆101 · Updated 2 years ago
- [ICLR 2025 Oral] Official implementation of "Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference Un…" ☆18 · Updated last year