thunlp / LEGENT
Open Platform for Embodied Agents
⭐336 · Updated 11 months ago
Alternatives and similar repositories for LEGENT
Users interested in LEGENT are comparing it to the libraries listed below.
- [ECCV 2024] 🐙 Octopus, an embodied vision-language model trained with RLEF that excels at embodied visual planning and programming ⭐293 · Updated last year
- [ICLR 2024] Source code for the paper "Building Cooperative Embodied Agents Modularly with Large Language Models" ⭐289 · Updated 9 months ago
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ⭐185 · Updated 3 months ago
- [arXiv 2023] Embodied Task Planning with Large Language Models ⭐193 · Updated 2 years ago
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ⭐276 · Updated 10 months ago
- Official repo for Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning ⭐404 · Updated last year
- [CVPR 2025] RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. Official repository. ⭐355 · Updated 2 months ago
- [CVPR 2024] Official implementation of MP5 ⭐106 · Updated last year
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model ⭐371 · Updated last year
- [ICML 2024] Official code repository for 3D embodied generalist agent LEO ⭐471 · Updated 8 months ago
- Training VLM agents with multi-turn reinforcement learning ⭐365 · Updated last week
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ⭐336 · Updated last year
- ⭐60 · Updated 9 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ⭐249 · Updated 2 months ago
- ⭐46 · Updated 2 years ago
- Code for RoboFlamingo ⭐417 · Updated last year
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation ⭐275 · Updated last month
- Implementation of "PaLM-E: An Embodied Multimodal Language Model" ⭐331 · Updated last year
- ⭐133 · Updated last year
- Towards Large Multimodal Models as Visual Foundation Agents ⭐248 · Updated 8 months ago
- [ICLR 2024 Spotlight] Text2Reward: Reward Shaping with Language Models for Reinforcement Learning ⭐194 · Updated last year
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs. ⭐363 · Updated last month
- Official code for the paper "Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld" ⭐62 · Updated last year
- This repo is a live list of papers on game playing and large multimodal models: "A Survey on Game Playing Agents and Large Models: Met…" ⭐160 · Updated last year
- ALFWorld: Aligning Text and Embodied Environments for Interactive Learning ⭐612 · Updated 5 months ago
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models ⭐210 · Updated 9 months ago
- LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents (ICLR 2024) ⭐85 · Updated 6 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models" ⭐326 · Updated 3 months ago
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ⭐105 · Updated last year
- ⭐118 · Updated 9 months ago