facebookresearch / open-eqa
OpenEQA: Embodied Question Answering in the Era of Foundation Models
☆327 Updated last year
Alternatives and similar repositories for open-eqa
Users interested in open-eqa are comparing it to the repositories listed below
- [ICML 2024] Official code repository for 3D embodied generalist agent LEO ☆464 Updated 6 months ago
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ☆176 Updated last month
- Compose multimodal datasets 🎹 ☆497 Updated 2 months ago
- [arXiv 2023] Embodied Task Planning with Large Language Models ☆192 Updated 2 years ago
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆272 Updated 10 months ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆225 Updated 7 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents ☆205 Updated last week
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆151 Updated 2 years ago
- Official repo and evaluation implementation of VSI-Bench ☆613 Updated 2 months ago
- [ECCV 2024] 🐙 Octopus, an embodied vision-language model trained with RLEF, excelling at embodied visual planning and programming ☆292 Updated last year
- Official implementation of ReALFRED (ECCV'24) ☆43 Updated last year
- [CVPR 2024] Code for the paper 'Towards Learning a Generalist Model for Embodied Navigation' ☆208 Updated last year
- Embodied Reasoning Question Answer (ERQA) Benchmark ☆235 Updated 7 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆394 Updated 9 months ago
- [CVPR 2024] Official implementation of MP5 ☆105 Updated last year
- Official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models" ☆313 Updated last month
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆133 Updated last year
- Evaluate Multimodal LLMs as Embodied Agents ☆54 Updated 8 months ago
- ☆99 Updated 3 months ago
- ☆54 Updated last year
- Implementation of "PaLM-E: An Embodied Multimodal Language Model" ☆329 Updated last year
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆74 Updated 10 months ago
- Repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆137 Updated 10 months ago
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ☆264 Updated 7 months ago
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models ☆205 Updated 7 months ago
- ☆29 Updated 2 months ago
- [ICCV 2025] A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with 3D World ☆334 Updated last week
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model ☆369 Updated last year
- WorldVLA: Towards Autoregressive Action World Model ☆472 Updated 3 weeks ago
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆582 Updated last year