remyxai / VQASynth
Compose multimodal datasets
⭐366 · Updated 3 weeks ago
Alternatives and similar repositories for VQASynth
Users interested in VQASynth are comparing it to the libraries listed below.
- Official repo and evaluation implementation of VSI-Bench · ⭐481 · Updated 2 months ago
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" · ⭐190 · Updated 5 months ago
- [ICML 2024] Official code repository for 3D embodied generalist agent LEO · ⭐436 · Updated 3 weeks ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." · ⭐253 · Updated 3 months ago
- OpenEQA: Embodied Question Answering in the Era of Foundation Models · ⭐278 · Updated 7 months ago
- A flexible and efficient codebase for training visually-conditioned language models (VLMs) · ⭐675 · Updated 10 months ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … · ⭐114 · Updated last week
- Cosmos-Reason1 models understand the physical common sense and generate appropriate embodied decisions in natural language through long c… · ⭐315 · Updated last month
- A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with 3D World · ⭐252 · Updated 5 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos · ⭐248 · Updated 3 months ago
- Code for "Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers" (NeurIPS 2024) · ⭐161 · Updated last month
- [CVPR 2024 & NeurIPS 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI · ⭐590 · Updated 2 months ago
- Official implementation of ECCV24 paper "SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding" · ⭐242 · Updated last month
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … · ⭐311 · Updated 4 months ago
- Code & Data for Grounded 3D-LLM with Referent Tokens · ⭐117 · Updated 4 months ago
- Embodied Reasoning Question Answer (ERQA) Benchmark · ⭐151 · Updated 2 months ago
- ⭐344 · Updated 3 months ago
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model · ⭐495 · Updated 6 months ago
- Heterogeneous Pre-trained Transformer (HPT) as Scalable Policy Learner · ⭐494 · Updated 5 months ago
- ⭐159 · Updated 2 months ago
- Project Page For "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" · ⭐339 · Updated last month
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation · ⭐315 · Updated 2 weeks ago
- Embodied Chain of Thought: A robotic policy that reasons to solve the task · ⭐236 · Updated last month
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy · ⭐209 · Updated last month
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation · ⭐255 · Updated 2 weeks ago
- This is the official code of VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding (ECCV 2024) · ⭐199 · Updated 5 months ago
- Theia: Distilling Diverse Vision Foundation Models for Robot Learning · ⭐231 · Updated last month
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success · ⭐378 · Updated 2 weeks ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [the first paper to explore R1 for video] · ⭐501 · Updated 2 weeks ago
- [COLM 2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs · ⭐142 · Updated 8 months ago