thunlp / EmbodiedEval
Evaluate Multimodal LLMs as Embodied Agents
☆54 · Updated 7 months ago
Alternatives and similar repositories for EmbodiedEval
Users interested in EmbodiedEval are comparing it to the repositories listed below
- ☆54 · Updated last year
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆74 · Updated 4 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆187 · Updated 2 months ago
- ☆32 · Updated last year
- ☆56 · Updated 9 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆132 · Updated 11 months ago
- 🦾 A Dual-System VLA with System2 Thinking ☆110 · Updated last month
- ☆80 · Updated last year
- ☆76 · Updated 4 months ago
- Official code for the paper: Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld ☆58 · Updated last year
- ☆30 · Updated last year
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆142 · Updated 6 months ago
- [CVPR 2024] Official implementation of MP5 ☆104 · Updated last year
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation"☆83Updated last month
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy☆225Updated 6 months ago
- Official Implementation of CAPEAM (ICCV'23)☆13Updated 10 months ago
- [arXiv 2023] Embodied Task Planning with Large Language Models☆191Updated 2 years ago
- Responsible Robotic Manipulation☆12Updated last month
- Official Implementation of ReALFRED (ECCV'24)☆43Updated 11 months ago
- ☆27Updated last month
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos☆157Updated last month
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction☆105Updated 5 months ago
- InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation☆43Updated 2 weeks ago
- LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents (ICLR 2024)☆79Updated 3 months ago
- [ICML 2024] The official implementation of "DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning" ☆81 · Updated 4 months ago
- Code for "Interactive Task Planning with Language Models"☆32Updated 5 months ago
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning☆74Updated 10 months ago
- HAZARD challenge☆36Updated 5 months ago
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment☆84Updated 4 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation"☆44Updated last year