thunlp / LEGENT
Open Platform for Embodied Agents
⭐339 · Updated last year
Alternatives and similar repositories for LEGENT
Users interested in LEGENT are comparing it to the repositories listed below.
- [ECCV2024] Octopus, an embodied vision-language model trained with RLEF that excels at embodied visual planning and programming. (⭐294, updated last year)
- [CVPR 2025] RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. Official repository. (⭐364, updated 3 months ago)
- [ICLR 2024] Source code for the paper "Building Cooperative Embodied Agents Modularly with Large Language Models" (⭐293, updated 10 months ago)
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks (⭐186, updated 4 months ago)
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) (⭐278, updated 11 months ago)
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model (⭐373, updated last year)
- [arXiv 2023] Embodied Task Planning with Large Language Models (⭐193, updated 2 years ago)
- OpenEQA: Embodied Question Answering in the Era of Foundation Models (⭐340, updated last year)
- (⭐61, updated 10 months ago)
- Official repo for Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning (⭐405, updated last year)
- Towards Large Multimodal Models as Visual Foundation Agents (⭐256, updated 9 months ago)
- Implementation of "PaLM-E: An Embodied Multimodal Language Model" (⭐335, updated 2 years ago)
- [CVPR2024] Official implementation of MP5 (⭐106, updated last year)
- [ICML 2024] LEO: An Embodied Generalist Agent in 3D World (⭐475, updated 9 months ago)
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. (⭐262, updated 3 months ago)
- Training VLM agents with multi-turn reinforcement learning (⭐391, updated last week)
- Code for RoboFlamingo (⭐421, updated last year)
- (⭐133, updated last year)
- (⭐118, updated 10 months ago)
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, embodied agents, and VLMs. (⭐379, updated 2 months ago)
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models (⭐214, updated 10 months ago)
- [ICLR 2024 Spotlight] Text2Reward: Reward Shaping with Language Models for Reinforcement Learning (⭐198, updated last year)
- ALFWorld: Aligning Text and Embodied Environments for Interactive Learning (⭐635, updated 6 months ago)
- A live list of papers on game playing and large multimodal models, from "A Survey on Game Playing Agents and Large Models: Met…" (⭐162, updated last year)
- Everything you need about robotics and AI agents is here (⭐34, updated last year)
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain (⭐106, updated last year)
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models) (⭐121, updated last year)
- Official code for the paper "Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld" (⭐62, updated last year)
- (⭐46, updated 2 years ago)
- Papers on integrating large language models with embodied AI (⭐36, updated 2 years ago)