JeffWang987 / EgoVid
EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation
☆116 · Updated last month
Alternatives and similar repositories for EgoVid
Users interested in EgoVid are comparing it to the repositories listed below.
- A list of works on video generation towards world models ☆165 · Updated last month
- Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models ☆152 · Updated 3 months ago
- ☆89 · Updated last month
- From Flatland to Space: Teaching Vision-Language Models to Perceive and Reason in 3D ☆57 · Updated 3 months ago
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning ☆41 · Updated 9 months ago
- ☆167 · Updated 6 months ago
- InternScenes: A Large-scale Interactive Indoor Scene Dataset with Realistic Layouts ☆92 · Updated last month
- Generative World Explorer ☆155 · Updated 3 months ago
- [arXiv'25] Learning Video Generation for Robotic Manipulation with Collaborative Trajectory Control ☆78 · Updated 2 months ago
- Code for the paper "Learning Camera Movement Control from Real-World Drone Videos" ☆31 · Updated 5 months ago
- Source code for the paper "MindJourney: Test-Time Scaling with World Models for Spatial Reasoning" ☆83 · Updated last month
- OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding ☆59 · Updated last month
- Official implementation of WorldScore: A Unified Evaluation Benchmark for World Generation ☆136 · Updated last month
- Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries" ☆80 · Updated last year
- Official implementation of the paper "Telling Left from Right: Identifying Geometry-Aware Semantic Correspondence" ☆133 · Updated last month
- VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction ☆263 · Updated 2 weeks ago
- Unifying 2D and 3D Vision-Language Understanding ☆104 · Updated last month
- [ICCV 2025] Ross3D: Reconstructive Visual Instruction Tuning with 3D-Awareness ☆55 · Updated last month
- [CVPR 2025] Code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" ☆161 · Updated 3 months ago
- Code and data for Grounded 3D-LLM with Referent Tokens ☆125 · Updated 8 months ago
- OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models ☆65 · Updated last month
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆55 · Updated 4 months ago
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆62 · Updated last month
- OmniWorld: A Multi-Domain and Multi-Modal Dataset for 4D World Modeling ☆264 · Updated this week
- Code for the paper "Learning from Videos for 3D World: Enhancing MLLMs with 3D Vision Geometry Priors" ☆123 · Updated this week
- ☆139 · Updated 8 months ago
- Official implementation of the paper "Exploring the Potential of Encoder-free Architectures in 3D LMMs" ☆55 · Updated 4 months ago
- Official repository of "EgoMono4D: Self-Supervised Monocular 4D Scene Reconstruction for Egocentric Videos" ☆34 · Updated 3 weeks ago
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces ☆80 · Updated 3 months ago
- [ECCV 2024] M3DBench introduces a comprehensive 3D instruction-following dataset with support for interleaved multi-modal prompts ☆61 · Updated 11 months ago