fudan-zvg / S-Agents
Official repository of S-Agents: Self-organizing Agents in Open-ended Environment
☆27 · Updated last year
Alternatives and similar repositories for S-Agents
Users interested in S-Agents are comparing it to the repositories listed below.
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula… ☆95 · Updated 3 months ago
- Official implementation of "Self-Improving Video Generation" ☆72 · Updated 5 months ago
- (ECCV 2024) Code for V-IRL: Grounding Virtual Intelligence in Real Life ☆360 · Updated 10 months ago
- [COLM-2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs ☆144 · Updated last year
- Multimodal RewardBench ☆53 · Updated 7 months ago
- Official implementation of paper "ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting" (CVPR 2025) ☆45 · Updated 5 months ago
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆77 · Updated 3 months ago
- [NeurIPS2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" ☆103 · Updated last year
- ☆33 · Updated 2 years ago
- ☆45 · Updated 9 months ago
- [CVPR2024] This is the official implementation of MP5 ☆104 · Updated last year
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆61 · Updated 7 months ago
- Official repo for StableLLAVA ☆94 · Updated last year
- Egocentric Video Understanding Dataset (EVUD) ☆31 · Updated last year
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆57 · Updated last year
- Official Code Repository for EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents (COLM 2024) ☆37 · Updated last year
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆70 · Updated 8 months ago
- [ECCV2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆94 · Updated last year
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆73 · Updated 10 months ago
- The official repository for our paper, "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reasoning". ☆139 · Updated 3 weeks ago
- ☆60 · Updated last month
- ☆88 · Updated 2 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆141 · Updated 2 weeks ago
- MLLM-Tool: A Multimodal Large Language Model For Tool Agent Learning ☆133 · Updated this week
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆147 · Updated 10 months ago
- [NeurIPS-2024] The official implementation of "Instruction-Guided Visual Masking" ☆38 · Updated 10 months ago
- ☆90 · Updated 3 months ago
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆153 · Updated last year
- Python Library to evaluate VLM models' robustness across diverse benchmarks ☆215 · Updated this week
- 🤖 [ICLR'25] Multimodal Video Understanding Framework (MVU) ☆47 · Updated 8 months ago