UCSB-NLP-Chang / Visual-Spatial-Planning
Official release of the benchmark from the paper "VSP: Diagnosing the Dual Challenges of Perception and Reasoning in Spatial Planning Tasks for MLLMs"
☆15 · Updated 5 months ago
Alternatives and similar repositories for Visual-Spatial-Planning
Users interested in Visual-Spatial-Planning are comparing it to the repositories listed below
- ☆133 · Updated last year
- NeurIPS 2022 Paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆98 · Updated 8 months ago
- Codebase for HiP ☆90 · Updated 2 years ago
- Instruction Following Agents with Multimodal Transformers ☆53 · Updated 3 years ago
- ☆33 · Updated last year
- Code for "Interactive Task Planning with Language Models" ☆33 · Updated this week
- Maze datasets for investigating OOD behavior of ML systems ☆70 · Updated 2 months ago
- ☆108 · Updated last week
- Pre-Trained Language Models for Interactive Decision-Making [NeurIPS 2022] ☆130 · Updated 3 years ago
- ☆56 · Updated last year
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula… ☆99 · Updated 7 months ago
- LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents (ICLR 2024) ☆85 · Updated 7 months ago
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆78 · Updated last year
- Official repository of ICLR 2022 paper FILM: Following Instructions in Language with Modular Methods ☆127 · Updated 2 years ago
- 🐍 A Python Package for Seamless Data Distribution in AI Workflows ☆25 · Updated 2 years ago
- ☆36 · Updated 2 years ago
- ☆46 · Updated last year
- Evaluate Multimodal LLMs as Embodied Agents ☆56 · Updated 11 months ago
- Source code for the paper "COMBO: Compositional World Models for Embodied Multi-Agent Cooperation" ☆45 · Updated 10 months ago
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models. ☆139 · Updated 2 years ago
- Repository for DialFRED. ☆46 · Updated 2 years ago
- ☆67 · Updated last year
- A mini-framework for running AI2-Thor with Docker. ☆37 · Updated last year
- ☆36 · Updated 2 months ago
- MiniGrid Implementation of BEHAVIOR Tasks ☆56 · Updated 3 months ago
- Official Repo of LangSuitE ☆84 · Updated last year
- ☆78 · Updated 7 months ago
- GROOT: Learning to Follow Instructions by Watching Gameplay Videos (ICLR'24, Spotlight) ☆66 · Updated 2 years ago
- Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal tra… ☆93 · Updated 2 years ago
- TEACh is a dataset of human-human interactive dialogues to complete tasks in a simulated household environment. ☆143 · Updated last year