fudan-zvg / S-Agents
Official repository of S-Agents: Self-organizing Agents in Open-ended Environment
☆26 · Updated last year
Alternatives and similar repositories for S-Agents
Users interested in S-Agents also compare it to the libraries listed below.
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula…" ☆100 · Updated 7 months ago
- Official implementation of the paper "ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting" (CVPR'25) ☆46 · Updated 9 months ago
- [CVPR 2024] The official implementation of MP5 ☆106 · Updated last year
- Official implementation of "Self-Improving Video Generation" ☆78 · Updated 9 months ago
- [NeurIPS 2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" ☆105 · Updated 2 years ago
- (ECCV 2024) Code for "V-IRL: Grounding Virtual Intelligence in Real Life" ☆367 · Updated last year
- ☆34 · Updated 2 years ago
- Official repo of the ICLR 2025 paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" ☆29 · Updated 6 months ago
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆79 · Updated last year
- ☆46 · Updated last year
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆77 · Updated 6 months ago
- MLLM-Tool: A Multimodal Large Language Model For Tool Agent Learning ☆138 · Updated 3 months ago
- PhysGame: a benchmark for physical commonsense evaluation in gameplay videos ☆47 · Updated 6 months ago
- Egocentric Video Understanding Dataset (EVUD) ☆32 · Updated last year
- Code and data for "Does Spatial Cognition Emerge in Frontier Models?" ☆27 · Updated 9 months ago
- ☆114 · Updated 6 months ago
- [COLM 2024] "List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs" ☆145 · Updated last year
- Official code repository for "EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents" (COLM 2024) ☆39 · Updated last year
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆158 · Updated 4 months ago
- ☆30 · Updated last year
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆64 · Updated 11 months ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆72 · Updated last year
- This repository is a collection of research papers on World Models. ☆43 · Updated 2 years ago
- Official implementation of "JARVIS-VLA: Post-Training Large-Scale Vision Language Models to Play Visual Games with Keyboards and Mouse" ☆122 · Updated 5 months ago
- Code for "Pretrained Language Models as Visual Planners for Human Assistance" ☆61 · Updated 2 years ago
- ☆30 · Updated last year
- [NeurIPS 2025] The official repository for our paper "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reason…" ☆152 · Updated 4 months ago
- [ECCV 2024] 🐙 Octopus, an embodied vision-language model trained with RLEF, excelling at embodied visual planning and programming ☆294 · Updated last year
- [NeurIPS 2024] The official implementation of "Instruction-Guided Visual Masking" ☆39 · Updated last year
- [CVPR 2024] ViT-Lens: Towards Omni-modal Representations ☆189 · Updated 11 months ago