xlang-ai / text2reward
[ICLR 2024 Spotlight] Text2Reward: Reward Shaping with Language Models for Reinforcement Learning
☆187 · Updated 11 months ago
Alternatives and similar repositories for text2reward
Users interested in text2reward are comparing it to the libraries listed below.
- [ICLR 2024] Source codes for the paper "Building Cooperative Embodied Agents Modularly with Large Language Models" ☆282 · Updated 7 months ago
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ☆266 · Updated 8 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆214 · Updated 3 weeks ago
- ProgPrompt for VirtualHome ☆141 · Updated 2 years ago
- Source codes for the paper "COMBO: Compositional World Models for Embodied Multi-Agent Cooperation" ☆44 · Updated 8 months ago
- ☆132 · Updated last year
- The source code of the paper "Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Pla… ☆105 · Updated last year
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models ☆208 · Updated 7 months ago
- Paper collections of the continuous effort starting from World Models. ☆188 · Updated last year
- Uni-RLHF platform for "Uni-RLHF: Universal Platform and Benchmark Suite for Reinforcement Learning with Diverse Human Feedback" (ICLR2024… ☆41 · Updated 11 months ago
- [ICML 2024] The official implementation of "DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning" ☆81 · Updated 5 months ago
- ☆88 · Updated 2 years ago
- LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents (ICLR 2024) ☆81 · Updated 5 months ago
- Implementation of TWOSOME ☆82 · Updated 10 months ago
- Codebase for the paper "RoCo: Dialectic Multi-Robot Collaboration with Large Language Models" ☆227 · Updated 2 years ago
- ☆87 · Updated 2 weeks ago
- Official code for the paper "Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld" ☆59 · Updated last year
- Official code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents" ☆276 · Updated 3 years ago
- ☆45 · Updated last year
- Official code release of the AAAI 2024 paper SayCanPay. ☆50 · Updated 3 weeks ago
- ☆65 · Updated last year
- Code for "Reinforcement Learning from Vision Language Foundation Model Feedback" ☆126 · Updated last year
- CivRealm is an interactive environment for the open-source strategy game Freeciv-web, based on Freeciv, a Civilization-inspired game. ☆129 · Updated last year
- A comprehensive list of PAPERS, CODEBASES, and DATASETS on Decision Making using Foundation Models, including LLMs and VLMs. ☆381 · Updated last year
- Implementation of "Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agen… ☆289 · Updated 2 years ago
- ☆46 · Updated last year
- We perform functional grounding of LLMs' knowledge in BabyAI-Text ☆275 · Updated 3 weeks ago
- An RL-Friendly Vision-Language Model for Minecraft ☆38 · Updated last year
- SmartPlay is a benchmark for Large Language Models (LLMs). It uses a variety of games to test important LLM capabilities as agents. … ☆143 · Updated last year
- Code for the paper "Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance", accepted to CoRL 2023 as an… ☆35 · Updated 4 months ago