SCAI-JHU / MuMA-ToM
MuMA-ToM: Multi-modal Multi-Agent Theory of Mind
☆33 · Updated 10 months ago
Alternatives and similar repositories for MuMA-ToM
Users interested in MuMA-ToM are comparing it to the repositories listed below.
- ☆132 · Updated last year
- [NeurIPS D&B Track 2024] Source code for the paper "Constrained Human-AI Cooperation: An Inclusive Embodied Social Intelligence Challenge… ☆22 · Updated 6 months ago
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula… ☆97 · Updated 5 months ago
- GitHub repository for "Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas" (ICML 2025) ☆53 · Updated 6 months ago
- Official code for the paper "WALL-E: World Alignment by NeuroSymbolic Learning improves World Model-based LLM Agents" ☆52 · Updated 6 months ago
- [ICML 2024] Language Models Represent Beliefs of Self and Others ☆33 · Updated last year
- ☆28 · Updated 9 months ago
- [NeurIPS 2024] Official implementation of "Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks" ☆87 · Updated 5 months ago
- Official repo for EscapeCraft (a 3D room-escape environment) and the MM-Escape benchmark. Accepted to ICCV 2025. ☆34 · Updated 4 months ago
- A collection of papers tracking the ongoing line of work starting from World Models. ☆188 · Updated last year
- LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents (ICLR 2024) ☆82 · Updated 5 months ago
- Code for our ACL 2025 paper "Language Repository for Long Video Understanding" ☆32 · Updated last year
- ☆21 · Updated last year
- Official repository of LatentSeek ☆68 · Updated 5 months ago
- Code for ACM MM 2024 paper "A Picture Is Worth a Graph: A Blueprint Debate Paradigm for Multimodal Reasoning" ☆19 · Updated 11 months ago
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆74 · Updated 11 months ago
- MAT: Multi-modal Agent Tuning 🔥 ICLR 2025 (Spotlight) ☆72 · Updated 5 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆89 · Updated last year
- Code for NeurIPS 2024 paper "AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning" ☆49 · Updated last year
- [NeurIPS 2025] Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆48 · Updated last month
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆80 · Updated 3 weeks ago
- [CVPR 2025] VISCO: Benchmarking Fine-Grained Critique and Correction Towards Self-Improvement in Visual Reasoning ☆14 · Updated 5 months ago
- The Social-IQ 2.0 Challenge Release for the Artificial Social Intelligence Workshop at ICCV '23 ☆34 · Updated 2 years ago
- ☆27 · Updated 5 months ago
- ☁️ KUMO: Generative Evaluation of Complex Reasoning in Large Language Models ☆19 · Updated 5 months ago
- ☆15 · Updated 3 weeks ago
- ☆104 · Updated 4 months ago
- Official codebase for the paper "Latent Visual Reasoning" ☆37 · Updated last month
- A Self-Training Framework for Vision-Language Reasoning ☆86 · Updated 10 months ago
- Imagine While Reasoning in Space: Multimodal Visualization-of-Thought (ICML 2025) ☆59 · Updated 7 months ago