SCAI-JHU / MuMA-ToM
MuMA-ToM: Multi-modal Multi-Agent Theory of Mind
☆37 · Updated 11 months ago
Alternatives and similar repositories for MuMA-ToM
Users interested in MuMA-ToM are comparing it to the libraries listed below.
- ☆133 · Updated last year
- [NeurIPS D&B Track 2024] Source code for the paper "Constrained Human-AI Cooperation: An Inclusive Embodied Social Intelligence Challenge… ☆23 · Updated 8 months ago
- Official repo for EscapeCraft (a 3D environment for room escape) and the MM-Escape benchmark; accepted at ICCV 2025. ☆36 · Updated 6 months ago
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula… ☆99 · Updated 6 months ago
- GitHub repository for "Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas" (ICML 2025) ☆67 · Updated 8 months ago
- Code for the NeurIPS 2024 paper "AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning" ☆50 · Updated last year
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆91 · Updated last year
- MAT: Multi-modal Agent Tuning 🔥 ICLR 2025 (Spotlight) ☆81 · Updated 3 weeks ago
- [CVPR 2025] VISCO: Benchmarking Fine-Grained Critique and Correction Towards Self-Improvement in Visual Reasoning ☆14 · Updated 7 months ago
- [CVPR 2024] Official implementation of MP5 ☆106 · Updated last year
- [NeurIPS'25] Official code for "PeRL: Permutation-Enhanced Reinforcement Learning for Interleaved Vision-Language Reasoning" ☆27 · Updated 3 months ago
- [NeurIPS 2025] Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆52 · Updated 3 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆58 · Updated last year
- Official code for the paper "WALL-E: World Alignment by NeuroSymbolic Learning Improves World Model-based LLM Agents" ☆55 · Updated last month
- ☆21 · Updated last year
- ☆87 · Updated last year
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆78 · Updated last year
- Official repository of LatentSeek ☆73 · Updated 7 months ago
- ☆16 · Updated 3 months ago
- ☆28 · Updated 11 months ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆72 · Updated last year
- [ICML 2024] Language Models Represent Beliefs of Self and Others ☆35 · Updated last year
- Official implementation of "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data" (ICLR 2024) ☆34 · Updated last year
- ☆25 · Updated 7 months ago
- LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents (ICLR 2024) ☆85 · Updated 7 months ago
- Official implementation of "Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation" (CVPR 202…☆40Updated 7 months ago
- Code for our ACL 2025 paper "Language Repository for Long Video Understanding"☆33Updated last year
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models☆84Updated 2 months ago
- ☁️ KUMO: Generative Evaluation of Complex Reasoning in Large Language Models☆19Updated 7 months ago
- ☆112Updated 5 months ago