thunlp / LEGENT
Open Platform for Embodied Agents
☆326 · Updated 7 months ago
Alternatives and similar repositories for LEGENT
Users interested in LEGENT are comparing it to the libraries listed below.
- [ECCV 2024] Octopus, an embodied vision-language model trained with RLEF, emerging superior in embodied visual planning and programming. ☆293 · Updated last year
- [ICLR 2024] Source code for the paper "Building Cooperative Embodied Agents Modularly with Large Language Models" ☆267 · Updated 5 months ago
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ☆163 · Updated 3 months ago
- [CVPR 2024] The official implementation of MP5 ☆103 · Updated last year
- [CVPR 2025] RoboBrain: A Unified Brain Model for Robotic Manipulation from Abstract to Concrete. Official repository. ☆299 · Updated 2 months ago
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ☆239 · Updated 5 months ago
- Official repo for Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning ☆384 · Updated 8 months ago
- [arXiv 2023] Embodied Task Planning with Large Language Models ☆191 · Updated 2 years ago
- ☆48 · Updated 4 months ago
- Towards Large Multimodal Models as Visual Foundation Agents ☆230 · Updated 4 months ago
- [ICML 2024] Official code repository for the 3D embodied generalist agent LEO ☆456 · Updated 4 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆179 · Updated last month
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model ☆368 · Updated last year
- Code for RoboFlamingo ☆399 · Updated last year
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ☆312 · Updated 11 months ago
- ☆130 · Updated last year
- ALFWorld: Aligning Text and Embodied Environments for Interactive Learning ☆510 · Updated last month
- A simulation platform for versatile Embodied AI research and development. ☆980 · Updated last month
- Papers on integrating large language models with embodied AI ☆35 · Updated last year
- This repo is a live list of papers on game playing and large multimodality models - "A Survey on Game Playing Agents and Large Models: Met…" ☆153 · Updated 11 months ago
- Implementation of "PaLM-E: An Embodied Multimodal Language Model" ☆319 · Updated last year
- Official code for the paper: Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld ☆57 · Updated 10 months ago
- RynnVLA-001: A Vision-Language-Action Model Boosted by Generative Priors ☆153 · Updated 2 weeks ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs. ☆281 · Updated 3 weeks ago
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models) ☆120 · Updated last year
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models ☆197 · Updated 5 months ago
- [ICLR 2024 Spotlight] Code for the paper "Text2Reward: Reward Shaping with Language Models for Reinforcement Learning" ☆173 · Updated 8 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ☆297 · Updated 3 months ago
- ☆110 · Updated 4 months ago
- Video Prediction Policy: A Generalist Robot Policy with Predictive Visual Representations (https://video-prediction-policy.github.io) ☆252 · Updated 3 months ago