Wayne-Mai / EgoLoc
For the Ego4D VQ3D task (☆19, updated last year)

Alternatives and similar repositories for EgoLoc:
Users interested in EgoLoc are comparing it to the repositories listed below.
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities (☆64, updated 4 months ago)
- [NeurIPS 2024] Official code repository for the MSR3D paper (☆37, updated 2 weeks ago)
- [3DV 2025] Reason3D: Searching and Reasoning 3D Segmentation via Large Language Model (☆60, updated last month)
- [NeurIPS 2024] Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding (☆74, updated 2 weeks ago)
- [CVPR 2024] "Instance Tracking in 3D Scenes from Egocentric Videos" (☆18, updated 7 months ago)
- Official implementation of "Language Conditioned Spatial Relation Reasoning for 3D Object Grounding" (NeurIPS 2022) (☆58, updated 2 years ago)
- Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries" (☆65, updated 6 months ago)
- ☆40, updated last year
- ☆48, updated 4 months ago
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) (☆35, updated last year)
- Code for 3DMIT: 3D Multi-Modal Instruction Tuning for Scene Understanding (☆28, updated 6 months ago)
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation (☆90, updated 3 months ago)
- [CVPR 2022 Oral] 3DJCG: A Unified Framework for Joint Dense Captioning and Visual Grounding on 3D Point Clouds (☆53, updated 2 years ago)
- ☆57, updated last year
- ☆109, updated last year
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning (☆124, updated last year)
- [NeurIPS 2024] Implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" (☆114, updated 2 months ago)
- PyTorch implementation of 3DRefTR, proposed in the paper "A Unified Framework for 3D Point Cloud Visual Grounding" (☆20, updated last year)
- Code for the ECCV 2022 paper "Bottom Up Top Down Detection Transformers for Language Grounding in Images and Point Clouds" (☆83, updated last year)
- Code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" (☆46, updated last month)
- Can 3D Vision-Language Models Truly Understand Natural Language? (☆21, updated 10 months ago)
- [NeurIPS 2023] OV-PARTS: Towards Open-Vocabulary Part Segmentation (☆77, updated 7 months ago)
- A collection of 3D vision-and-language papers and datasets (e.g., 3D visual grounding, 3D question answering, 3D dense captioning) (☆96, updated last year)
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks (☆59, updated 4 months ago)
- [CVPR 2024] Visual Programming for Zero-shot Open-Vocabulary 3D Visual Grounding (☆47, updated 6 months ago)
- Code for "Chat-3D: Data-efficiently Tuning Large Language Model for Universal Dialogue of 3D Scenes" (☆50, updated 10 months ago)
- Code for "Context-aware Alignment and Mutual Masking for 3D-Language Pre-training" (CVPR 2023) (☆26, updated last year)
- [ACM MM 2024 Oral] 3D-GRES: Generalized 3D Referring Expression Segmentation (☆32, updated 2 months ago)
- [CVPR 2024] Official implementation of "Sculpting Holistic 3D Representation in Contrastive Language-Image-3D Pre-training" (☆32, updated 10 months ago)