remyxai / VQASynth
Compose multimodal datasets 🎹
☆309 · Updated this week
Alternatives and similar repositories for VQASynth:
Users interested in VQASynth are comparing it to the libraries listed below.
- A flexible and efficient codebase for training visually-conditioned language models (VLMs) ☆610 · Updated 8 months ago
- Official repo and evaluation implementation of VSI-Bench ☆410 · Updated 2 weeks ago
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ☆260 · Updated 6 months ago
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆138 · Updated 3 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ☆219 · Updated last month
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆449 · Updated 4 months ago
- [ICML 2024] Official code repository for 3D embodied generalist agent LEO ☆417 · Updated 2 months ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆271 · Updated 2 months ago
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆477 · Updated 7 months ago
- ☆156 · Updated 3 weeks ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆186 · Updated last month
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆127 · Updated 4 months ago
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ☆177 · Updated 2 weeks ago
- 🔥[ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆195 · Updated this week
- [COLM-2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs ☆139 · Updated 6 months ago
- [AAAI-25] Cobra: Extending Mamba to Multi-modal Large Language Model for Efficient Inference ☆268 · Updated 2 months ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆191 · Updated last month
- ☆369 · Updated 10 months ago
- Official implementation of ECCV24 paper "SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding" ☆231 · Updated last month
- ☆299 · Updated last month
- A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with 3D World ☆224 · Updated 3 months ago
- This is the official code of VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding (ECCV 2024) ☆179 · Updated 3 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆847 · Updated 3 months ago
- ☆316 · Updated last year
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆309 · Updated 11 months ago
- [ICCV2023] VLPart: Going Denser with Open-Vocabulary Part Segmentation ☆369 · Updated last year
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆127 · Updated last year
- [CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆315 · Updated 8 months ago
- [CVPR 2024 & NeurIPS 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI ☆562 · Updated 3 weeks ago