dongyh20 / Octopus
[ECCV 2024] Octopus, an embodied vision-language model trained with RLEF, excelling at embodied visual planning and programming.
☆292 · Updated last year
Alternatives and similar repositories for Octopus
Users interested in Octopus are comparing it to the libraries listed below:
- (ECCV 2024) Code for V-IRL: Grounding Virtual Intelligence in Real Life · ☆364 · Updated 11 months ago
- Open Platform for Embodied Agents · ☆333 · Updated 10 months ago
- OpenEQA: Embodied Question Answering in the Era of Foundation Models · ☆330 · Updated last year
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula…" · ☆97 · Updated 5 months ago
- [ICLR 2024] Source code for the paper "Building Cooperative Embodied Agents Modularly with Large Language Models" · ☆282 · Updated 7 months ago
- [CVPR 2024] Official implementation of MP5 · ☆106 · Updated last year
- ☆45 · Updated last year
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World · ☆134 · Updated last year
- Official implementation of WebVLN: Vision-and-Language Navigation on Websites · ☆30 · Updated last year
- [ICML 2024] Official code repository for the 3D embodied generalist agent LEO · ☆465 · Updated 7 months ago
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model · ☆370 · Updated last year
- [arXiv 2023] Embodied Task Planning with Large Language Models · ☆192 · Updated 2 years ago
- Code for "Learning to Model the World with Language" (ICML 2024 Oral) · ☆413 · Updated 2 years ago
- ☆132 · Updated last year
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks · ☆179 · Updated last month
- ☆116 · Updated 7 months ago
- ☆96 · Updated last year
- Pandora: Towards General World Model with Natural Language Actions and Video States · ☆529 · Updated last year
- [ICCV'23] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models · ☆208 · Updated 7 months ago
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) · ☆266 · Updated 8 months ago
- Implementation of "PaLM-E: An Embodied Multimodal Language Model" · ☆330 · Updated last year
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents · ☆214 · Updated last month
- Official implementation of "JARVIS-VLA: Post-Training Large-Scale Vision Language Models to Play Visual Games with Keyboards and Mouse" · ☆107 · Updated 2 months ago
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain · ☆103 · Updated last year
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation · ☆460 · Updated 11 months ago
- [NeurIPS 2024] Official implementation of Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks · ☆87 · Updated 5 months ago
- Evaluate Multimodal LLMs as Embodied Agents · ☆54 · Updated 9 months ago
- Towards Large Multimodal Models as Visual Foundation Agents · ☆244 · Updated 6 months ago
- [ECCV 2024] STEVE in Minecraft: See and Think: Embodied Agent in Virtual Environment · ☆39 · Updated last year
- (VillagerAgent, ACL 2024) A graph-based Minecraft multi-agent framework · ☆81 · Updated 5 months ago