thunlp / LEGENT
Open Platform for Embodied Agents
⭐337 · Updated last year
Alternatives and similar repositories for LEGENT
Users interested in LEGENT are comparing it to the libraries listed below.
- [ECCV2024] 🐙 Octopus, an embodied vision-language model trained with RLEF, emerging superior in embodied visual planning and programming. ⭐294 · Updated last year
- [ICLR 2024] Source code for the paper "Building Cooperative Embodied Agents Modularly with Large Language Models" ⭐291 · Updated 10 months ago
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ⭐186 · Updated 4 months ago
- [arXiv 2023] Embodied Task Planning with Large Language Models ⭐194 · Updated 2 years ago
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ⭐278 · Updated 10 months ago
- [CVPR 2025] RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. Official repository. ⭐361 · Updated 3 months ago
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ⭐339 · Updated last year
- [CVPR2024] The official implementation of MP5 ⭐106 · Updated last year
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model ⭐371 · Updated last year
- Official repo for Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning ⭐405 · Updated last year
- Towards Large Multimodal Models as Visual Foundation Agents ⭐254 · Updated 9 months ago
- ⭐61 · Updated 9 months ago
- [ICML 2024] Official code repository for the 3D embodied generalist agent LEO ⭐472 · Updated 9 months ago
- Training VLM agents with multi-turn reinforcement learning ⭐381 · Updated this week
- ⭐133 · Updated last year
- Code for RoboFlamingo ⭐421 · Updated last year
- Implementation of "PaLM-E: An Embodied Multimodal Language Model" ⭐333 · Updated 2 years ago
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models ⭐211 · Updated 10 months ago
- Official repo of VLABench, a large-scale benchmark designed for fair evaluation of VLAs, embodied agents, and VLMs. ⭐376 · Updated 2 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ⭐260 · Updated 3 months ago
- Official code for the paper "Embodied Multi-Modal Agent Trained by an LLM from a Parallel TextWorld" ⭐62 · Updated last year
- A live list of papers on game playing and large multimodal models: "A Survey on Game Playing Agents and Large Models: Met…" ⭐161 · Updated last year
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation ⭐280 · Updated last week
- ⭐46 · Updated 2 years ago
- (ECCV 2024) Code for V-IRL: Grounding Virtual Intelligence in Real Life ⭐367 · Updated last year
- ALFWorld: Aligning Text and Embodied Environments for Interactive Learning ⭐620 · Updated 6 months ago
- [ICLR 2024 Spotlight] Text2Reward: Reward Shaping with Language Models for Reinforcement Learning ⭐195 · Updated last year
- ⭐118 · Updated 9 months ago
- Evaluate Multimodal LLMs as Embodied Agents ⭐57 · Updated 11 months ago
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models). ⭐121 · Updated last year