facebookresearch / open-eqa
OpenEQA: Embodied Question Answering in the Era of Foundation Models
⭐250 · Updated 3 months ago
Alternatives and similar repositories for open-eqa:
Users interested in open-eqa are comparing it to the repositories listed below.
- Octopus: an embodied vision-language model trained with RLEF that excels at embodied visual planning and programming. ⭐278 · Updated 7 months ago
- Compose multimodal datasets ⭐261 · Updated last month
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models ⭐162 · Updated 7 months ago
- [ICLR 2024] Source code for the paper "Building Cooperative Embodied Agents Modularly with Large Language Models" ⭐242 · Updated 2 months ago
- [arXiv 2023] Embodied Task Planning with Large Language Models ⭐167 · Updated last year
- ⭐123 · Updated 6 months ago
- LLaRA: Large Language and Robotics Assistant ⭐163 · Updated 3 months ago
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model ⭐348 · Updated 6 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ⭐124 · Updated 2 months ago
- [COLM 2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs ⭐132 · Updated 4 months ago
- [ICML 2024] Official code repository for the 3D embodied generalist agent LEO ⭐396 · Updated 3 months ago
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ⭐151 · Updated this week
- [CVPR 2024] Code for the paper "Towards Learning a Generalist Model for Embodied Navigation" ⭐150 · Updated 7 months ago
- LAPA: Latent Action Pretraining from Videos ⭐136 · Updated 3 weeks ago
- Official repo and evaluation implementation of VSI-Bench ⭐326 · Updated this week
- LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents (ICLR 2024) ⭐65 · Updated 5 months ago
- Embodied Chain of Thought: a robotic policy that reasons in order to solve the task. ⭐121 · Updated 4 months ago
- A flexible and efficient codebase for training visually-conditioned language models (VLMs) ⭐543 · Updated 6 months ago
- Official repository of Learning to Act from Actionless Videos through Dense Correspondences ⭐191 · Updated 8 months ago
- A collection of papers from the continuing line of work starting from World Models. ⭐161 · Updated 6 months ago
- ⭐65 · Updated last month
- Official implementation of ReALFRED (ECCV'24) ⭐31 · Updated 3 months ago
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ⭐403 · Updated 2 months ago
- VideoLLM-online: Online Video Large Language Model for Streaming Video (CVPR 2024) ⭐280 · Updated 5 months ago
- A repository accompanying the PARTNR benchmark for using Large Planning Models (LPMs) to solve Human-Robot Collaboration or Robot Instruc… ⭐87 · Updated last month
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ⭐123 · Updated last year
- Code for "Learning to Model the World with Language" (ICML 2024 Oral) ⭐376 · Updated last year
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ⭐306 · Updated 9 months ago
- Official implementation of "A Simple LLM Framework for Long-Range Video Question-Answering" ⭐89 · Updated 2 months ago
- PyTorch implementation of the RT-1-X and RT-2-X models from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" ⭐186 · Updated 2 weeks ago