SCAI-JHU / MuMA-ToM
MuMA-ToM: Multi-modal Multi-Agent Theory of Mind
☆31 · Updated 8 months ago
Alternatives and similar repositories for MuMA-ToM
Users interested in MuMA-ToM are comparing it to the repositories listed below.
- ☆131 · Updated last year
- ☆21 · Updated 10 months ago
- [ICML 2024] Language Models Represent Beliefs of Self and Others · ☆33 · Updated last year
- MAT: Multi-modal Agent Tuning 🔥 ICLR 2025 (Spotlight) · ☆62 · Updated 3 months ago
- Official Repository of LatentSeek · ☆60 · Updated 3 months ago
- [NeurIPS D&B Track 2024] Source code for the paper "Constrained Human-AI Cooperation: An Inclusive Embodied Social Intelligence Challenge…" · ☆21 · Updated 4 months ago
- GitHub repository for "Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas" (ICML 2025) · ☆46 · Updated 4 months ago
- [ICLR 2025] Official codebase for the paper "Multimodal Situational Safety" · ☆24 · Updated 3 months ago
- ☁️ KUMO: Generative Evaluation of Complex Reasoning in Large Language Models · ☆19 · Updated 3 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning · ☆88 · Updated last year
- Official repo for EscapeCraft (a 3D environment for room escape) and the MM-Escape benchmark; accepted at ICCV 2025 · ☆34 · Updated 2 months ago
- [NeurIPS 2025] Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models · ☆44 · Updated this week
- [CVPR 2025] VISCO: Benchmarking Fine-Grained Critique and Correction Towards Self-Improvement in Visual Reasoning · ☆14 · Updated 3 months ago
- ☆74 · Updated 9 months ago
- ☆28 · Updated 7 months ago
- Paper collection tracking the ongoing line of work starting from World Models · ☆184 · Updated last year
- [IROS'25 Oral & NeurIPSw'24] Official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simula…" · ☆95 · Updated 3 months ago
- LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents (ICLR 2024) · ☆79 · Updated 3 months ago
- ☆82 · Updated last year
- XL-VLMs: General Repository for eXplainable Large Vision Language Models · ☆33 · Updated 2 weeks ago
- Official implementation of GUI-R1: A Generalist R1-Style Vision-Language Action Model For GUI Agents · ☆184 · Updated 4 months ago
- 🔥 Omni large models and datasets for understanding and generating multi-modalities · ☆17 · Updated 11 months ago
- The official implementation of "Grounded Chain-of-Thought for Multimodal Large Language Models" · ☆14 · Updated 2 months ago
- Imagine While Reasoning in Space: Multimodal Visualization-of-Thought (ICML 2025) · ☆48 · Updated 5 months ago
- Official code for the paper "WALL-E: World Alignment by NeuroSymbolic Learning improves World Model-based LLM Agents" · ☆44 · Updated 4 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models · ☆80 · Updated last year
- Code for NeurIPS 2024 paper "AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning" · ☆46 · Updated 10 months ago
- More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models · ☆56 · Updated 3 months ago
- Sotopia-π: Interactive Learning of Socially Intelligent Language Agents (ACL 2024) · ☆77 · Updated last year
- A Self-Training Framework for Vision-Language Reasoning · ☆84 · Updated 8 months ago