fudan-zvg / S-Agents
Official repository of S-Agents: Self-organizing Agents in Open-ended Environment
☆26 · Updated last year
Alternatives and similar repositories for S-Agents
Users interested in S-Agents are comparing it to the libraries listed below.
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula… ☆91 · Updated last month
- (ECCV 2024) Code for V-IRL: Grounding Virtual Intelligence in Real Life ☆359 · Updated 8 months ago
- ☆72 · Updated 2 weeks ago
- [CVPR 2024] The official implementation of MP5 ☆103 · Updated last year
- [NeurIPS 2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" ☆105 · Updated last year
- Code and data for "Does Spatial Cognition Emerge in Frontier Models?" ☆22 · Updated 3 months ago
- [COLM 2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs ☆144 · Updated 11 months ago
- Official implementation of "Self-Improving Video Generation" ☆68 · Updated 3 months ago
- Official repo for StableLLAVA ☆95 · Updated last year
- Multimodal RewardBench ☆45 · Updated 5 months ago
- Official code repository for "EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents" (COLM 2024) ☆35 · Updated last year
- [ICLR 2025] Code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" ☆73 · Updated last month
- ☆33 · Updated 2 years ago
- Official implementation of "JARVIS-VLA: Post-Training Large-Scale Vision Language Models to Play Visual Games with Keyboards and Mouse" ☆89 · Updated 2 months ago
- ☆45 · Updated 7 months ago
- Code for "Pretrained Language Models as Visual Planners for Human Assistance" ☆61 · Updated 2 years ago
- [ECCV 2024] 🐙 Octopus, an embodied vision-language model trained with RLEF that excels at embodied visual planning and programming ☆292 · Updated last year
- [NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings" ☆125 · Updated 11 months ago
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆145 · Updated 8 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆61 · Updated 5 months ago
- ☆109 · Updated 4 months ago
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆133 · Updated last year
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆86 · Updated 11 months ago
- Official implementation of the paper "ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting" (CVPR 2025) ☆42 · Updated 3 months ago
- ☆36 · Updated last year
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- ☆76 · Updated last year
- The official repository for the paper "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reasoning" ☆126 · Updated 3 weeks ago
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆105 · Updated last year
- ☆50 · Updated last year