LaVi-Lab / Video-3D-LLM
[CVPR 2025] The code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding".
☆51 · Updated last week
Alternatives and similar repositories for Video-3D-LLM:
Users interested in Video-3D-LLM are comparing it to the repositories listed below.
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆65 · Updated 5 months ago
- [ECCV 2024] M3DBench introduces a comprehensive 3D instruction-following dataset with support for interleaved multi-modal prompts. ☆60 · Updated 5 months ago
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆42 · Updated last week
- Code and data for Grounded 3D-LLM with Referent Tokens ☆105 · Updated 2 months ago
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆93 · Updated 3 months ago
- ☆48 · Updated 5 months ago
- Official implementation of Language Conditioned Spatial Relation Reasoning for 3D Object Grounding (NeurIPS'22). ☆59 · Updated 2 years ago
- Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries" ☆73 · Updated 7 months ago
- Code for "Chat-3D: Data-efficiently Tuning Large Language Model for Universal Dialogue of 3D Scenes" ☆51 · Updated 11 months ago
- Code for 3DMIT: 3D Multi-Modal Instruction Tuning for Scene Understanding ☆29 · Updated 7 months ago
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated 11 months ago
- [CVPR 2022 Oral] 3DJCG: A Unified Framework for Joint Dense Captioning and Visual Grounding on 3D Point Clouds ☆53 · Updated 2 years ago
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning ☆36 · Updated 3 months ago
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆127 · Updated last year
- A PyTorch implementation of 3DRefTR, proposed in the paper "A Unified Framework for 3D Point Cloud Visual Grounding" ☆23 · Updated last year
- [CVPR 2024] Visual Programming for Zero-shot Open-Vocabulary 3D Visual Grounding ☆49 · Updated 7 months ago
- ☆113 · Updated last year
- A collection of 3D vision-and-language papers and datasets (e.g., 3D visual grounding, 3D question answering, and 3D dense captioning). ☆97 · Updated 2 years ago
- Official implementation of Bootstrapping Language-Guided Navigation Learning with Self-Refining Data Flywheel ☆18 · Updated 3 months ago
- [ICCV 2023] Multi3DRefer: Grounding Text Description to Multiple 3D Objects ☆81 · Updated last year
- 😎 An up-to-date and curated list of awesome 3D visual grounding papers, methods, and resources. ☆130 · Updated this week
- FleVRS: Towards Flexible Visual Relationship Segmentation (NeurIPS 2024) ☆21 · Updated 3 months ago
- ☆25 · Updated last year
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆83 · Updated 6 months ago
- [ECCV 2024 Oral, Best Paper Finalist] Official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation …" ☆36 · Updated 2 weeks ago
- Accepted by CVPR 2024 ☆32 · Updated 9 months ago
- [ICCV 2021] 3DVG-Transformer: Relation Modeling for Visual Grounding on Point Clouds ☆41 · Updated 2 years ago
- [NeurIPS 2024] Implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆133 · Updated 2 months ago
- ☆23 · Updated 3 months ago