remyxai / VQASynth
Compose multimodal datasets 🎹
☆525 · Updated 4 months ago
Alternatives and similar repositories for VQASynth
Users interested in VQASynth are comparing it to the repositories listed below.
- [NeurIPS'24] Implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆298 · Updated last year
- Official repo and evaluation implementation of VSI-Bench ☆655 · Updated 4 months ago
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ☆334 · Updated last year
- [ICML 2024] Official code repository for 3D embodied generalist agent LEO ☆471 · Updated 8 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models" ☆325 · Updated 3 months ago
- [ICCV 2025] A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with 3D World ☆358 · Updated 2 months ago
- [CVPR 2024 & NeurIPS 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI ☆646 · Updated 6 months ago
- A paper list for spatial reasoning ☆550 · Updated this week
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆609 · Updated last year
- A flexible and efficient codebase for training visually-conditioned language models (VLMs) ☆884 · Updated last year
- Official implementation of Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence ☆411 · Updated 3 weeks ago
- Code for "Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers" (NeurIPS 2024) ☆201 · Updated 2 months ago
- Official implementation of the ECCV 2024 paper "SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding" ☆274 · Updated 9 months ago
- Official code of "VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding" (ECCV 2024) ☆277 · Updated last year
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆196 · Updated 7 months ago
- 😎 Up-to-date, curated list of awesome 3D Visual Grounding papers, methods & resources ☆253 · Updated 3 weeks ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆424 · Updated 11 months ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆226 · Updated 8 months ago
- Heterogeneous Pre-trained Transformer (HPT) as a Scalable Policy Learner ☆521 · Updated last year
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆154 · Updated 2 years ago
- Code for the Molmo Vision-Language Model ☆845 · Updated last year
- ☆147 · Updated 2 years ago
- ☆112 · Updated 5 months ago
- ☆170 · Updated 10 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆319 · Updated last week
- Unified Vision-Language-Action Model ☆256 · Updated 2 months ago
- Code & data for Grounded 3D-LLM with Referent Tokens ☆131 · Updated 11 months ago
- [CVPR 2025] Code for the paper "Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding" ☆187 · Updated 6 months ago
- Embodied Reasoning Question Answer (ERQA) Benchmark ☆250 · Updated 9 months ago
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆215 · Updated last week