VIRL-Platform / VIRL
(ECCV 2024) Code for V-IRL: Grounding Virtual Intelligence in Real Life
Alternatives and similar repositories for VIRL
Users interested in VIRL compare it to the repositories listed below.
- [ECCV 2024] Octopus, an embodied vision-language model trained with RLEF that excels at embodied visual planning and programming
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents
- MLLM-Tool: A Multimodal Large Language Model for Tool Agent Learning
- Pandora: Towards General World Model with Natural Language Actions and Video States
- OpenEQA: Embodied Question Answering in the Era of Foundation Models
- Official repo and evaluation implementation of VSI-Bench
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models
- [COLM 2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula…"
- Compose multimodal datasets
- Code for Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models
- Code for the paper "AutoPresent: Designing Structured Visuals From Scratch" (CVPR 2025)
- Open Platform for Embodied Agents
- Official repository of S-Agents: Self-organizing Agents in Open-ended Environment
- [ICRA 2024] Chat with NeRF enables users to interact with a NeRF model by typing in natural language
- Evaluation code for the paper "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025]
- Long Context Transfer from Language to Vision
- [CVPR 2025] EgoLife: Towards Egocentric Life Assistant
- Official implementation of the paper "SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training"
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024]
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or…
- Visual Planning: Let's Think Only with Images
- VillagerAgent (ACL 2024): a graph-based Minecraft multi-agent framework
- [ICLR 2025] VILA-U: A Unified Foundation Model Integrating Visual Understanding and Generation
- [ECCV 2024] STEVE, a Minecraft agent from "See and Think: Embodied Agent in Virtual Environment"
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs
- Official implementation of the ICCV 2023 paper "3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment"