dongyh20 / Octopus
[ECCV 2024] Octopus, an embodied vision-language model trained with RLEF that excels in embodied visual planning and programming.
☆293 · Updated last year
Alternatives and similar repositories for Octopus
Users interested in Octopus are comparing it to the repositories listed below.
- Open Platform for Embodied Agents · ☆334 · Updated 11 months ago
- (ECCV 2024) Code for V-IRL: Grounding Virtual Intelligence in Real Life · ☆365 · Updated last year
- [ICLR 2024] Source code for the paper "Building Cooperative Embodied Agents Modularly with Large Language Models" · ☆287 · Updated 8 months ago
- OpenEQA: Embodied Question Answering in the Era of Foundation Models · ☆333 · Updated last year
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula…" · ☆97 · Updated 6 months ago
- [CVPR 2024] The official implementation of MP5 · ☆106 · Updated last year
- Code for "Learning to Model the World with Language" (ICML 2024 Oral) · ☆414 · Updated 2 years ago
- ☆46 · Updated 2 years ago
- ☆118 · Updated 8 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World · ☆134 · Updated last year
- [ICML 2024] Official code repository for the 3D embodied generalist agent LEO · ☆470 · Updated 7 months ago
- ☆133 · Updated last year
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model · ☆370 · Updated last year
- Official implementation of "JARVIS-VLA: Post-Training Large-Scale Vision Language Models to Play Visual Games with Keyboards and Mouse" · ☆112 · Updated 3 months ago
- [ECCV 2024] STEVE in Minecraft, for "See and Think: Embodied Agent in Virtual Environment" · ☆39 · Updated last year
- ☆96 · Updated last year
- [arXiv 2023] Embodied Task Planning with Large Language Models · ☆193 · Updated 2 years ago
- Official implementation of WebVLN: Vision-and-Language Navigation on Websites · ☆30 · Updated last year
- (VillagerAgent, ACL 2024) A graph-based Minecraft multi-agent framework · ☆82 · Updated 6 months ago
- Implementation of "PaLM-E: An Embodied Multimodal Language Model" · ☆332 · Updated last year
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models · ☆209 · Updated 8 months ago
- [NeurIPS 2024] Official implementation of Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks · ☆89 · Updated 6 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation · ☆460 · Updated last year
- [ICRA 2024] Chat with NeRF enables users to interact with a NeRF model by typing in natural language · ☆318 · Updated 2 months ago
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) · ☆273 · Updated 9 months ago
- Implementation of "Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agen…" · ☆289 · Updated 2 years ago
- GPT-4V in Wonderland: LMMs as Smartphone Agents · ☆135 · Updated last year
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks · ☆185 · Updated 2 months ago
- GROOT: Learning to Follow Instructions by Watching Gameplay Videos (ICLR'24 Spotlight) · ☆65 · Updated last year
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain · ☆105 · Updated last year