LaVi-Lab / Video-3D-LLM
The code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding".
☆46, updated last month
Alternatives and similar repositories for Video-3D-LLM:
Users interested in Video-3D-LLM are comparing it to the repositories listed below.
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities (☆64, updated 4 months ago)
- [NeurIPS 2024] Official code repository for MSR3D paper (☆37, updated 2 weeks ago)
- Code & Data for Grounded 3D-LLM with Referent Tokens (☆98, updated last month)
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation (☆90, updated 3 months ago)
- [ECCV 2024] M3DBench introduces a comprehensive 3D instruction-following dataset with support for interleaved multi-modal prompts. (☆58, updated 4 months ago)
- ☆48, updated 4 months ago
- Code for 3DMIT: 3D Multi-Modal Instruction Tuning for Scene Understanding (☆28, updated 6 months ago)
- Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries" (☆65, updated 6 months ago)
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning (☆124, updated last year)
- [CVPR 2024] Visual Programming for Zero-shot Open-Vocabulary 3D Visual Grounding (☆47, updated 6 months ago)
- Code for "Chat-3D: Data-efficiently Tuning Large Language Model for Universal Dialogue of 3D Scenes" (☆50, updated 10 months ago)
- Can 3D Vision-Language Models Truly Understand Natural Language? (☆21, updated 10 months ago)
- Official implementation of Bootstrapping Language-Guided Navigation Learning with Self-Refining Data Flywheel (☆17, updated 2 months ago)
- ☆66, updated 2 months ago
- 4D Panoptic Scene Graph Generation (NeurIPS'23 Spotlight) (☆100, updated 9 months ago)
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning (☆35, updated 2 months ago)
- [NeurIPS 2024] Official code for HourVideo: 1-Hour Video Language Understanding (☆62, updated last month)
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" (☆114, updated 2 months ago)
- [CVPR 2022 Oral] 3DJCG: A Unified Framework for Joint Dense Captioning and Visual Grounding on 3D Point Clouds (☆53, updated 2 years ago)
- [CVPR'24 Highlight] The official code and data for paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… (☆56, updated 2 months ago)
- [ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation … (☆36, updated this week)
- A collection of 3D vision and language (e.g., 3D Visual Grounding, 3D Question Answering and 3D Dense Caption) papers and datasets. (☆96, updated last year)
- ☆109, updated last year
- [NeurIPS 2024] Lexicon3D: Probing Visual Foundation Models for Complex 3D Scene Understanding (☆72, updated 2 weeks ago)
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models (☆81, updated 5 months ago)
- Official implementation of Language Conditioned Spatial Relation Reasoning for 3D Object Grounding (NeurIPS'22). (☆58, updated 2 years ago)
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding (☆82, updated 2 months ago)
- This is a PyTorch implementation of 3DRefTR proposed by our paper "A Unified Framework for 3D Point Cloud Visual Grounding" (☆20, updated last year)