SCAI-JHU / MuMA-ToM
MuMA-ToM: Multi-modal Multi-Agent Theory of Mind
☆32 · Updated 9 months ago
Alternatives and similar repositories for MuMA-ToM
Users interested in MuMA-ToM are comparing it to the repositories listed below.
- ☆131 · Updated last year
- GitHub repository for "Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas" (ICML 2025) · ☆51 · Updated 5 months ago
- [NeurIPS D&B Track 2024] Source code for the paper "Constrained Human-AI Cooperation: An Inclusive Embodied Social Intelligence Challenge… · ☆22 · Updated 5 months ago
- [ICML 2024] Language Models Represent Beliefs of Self and Others · ☆33 · Updated last year
- ☆13 · Updated 11 months ago
- MAT: Multi-modal Agent Tuning 🔥 ICLR 2025 (Spotlight) · ☆67 · Updated 4 months ago
- Official repo for EscapeCraft (a 3D environment for room escape) and the MM-Escape benchmark; accepted at ICCV 2025. · ☆34 · Updated 3 months ago
- ☆28 · Updated 8 months ago
- [CVPR 2025] VISCO: Benchmarking Fine-Grained Critique and Correction Towards Self-Improvement in Visual Reasoning · ☆14 · Updated 4 months ago
- ☁️ KUMO: Generative Evaluation of Complex Reasoning in Large Language Models · ☆19 · Updated 4 months ago
- ☆15 · Updated 2 weeks ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning · ☆88 · Updated last year
- [NeurIPS 2024] Official implementation of Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks · ☆84 · Updated 4 months ago
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula… · ☆95 · Updated 4 months ago
- LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents (ICLR 2024) · ☆79 · Updated 4 months ago
- ☆84 · Updated last year
- Official code for the paper "WALL-E: World Alignment by NeuroSymbolic Learning improves World Model-based LLM Agents" · ☆52 · Updated 5 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models · ☆80 · Updated last year
- Sotopia-π: Interactive Learning of Socially Intelligent Language Agents (ACL 2024) · ☆78 · Updated last year
- ☆170 · Updated 10 months ago
- ☆34 · Updated 2 years ago
- A collection of papers from the continuing line of work starting from World Models. · ☆186 · Updated last year
- [CVPR 2024] Official implementation of MP5 · ☆105 · Updated last year
- A Self-Training Framework for Vision-Language Reasoning · ☆84 · Updated 9 months ago
- Official repository of LatentSeek · ☆66 · Updated 4 months ago
- Code for the NeurIPS 2024 paper "AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning" · ☆49 · Updated 11 months ago
- ☆21 · Updated 11 months ago
- [NeurIPS 2024] Repo for the "Visualization-of-Thought" dataset, construction code, and evaluation. · ☆32 · Updated last year
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning". · ☆62 · Updated last year
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning · ☆74 · Updated 10 months ago