OmniMMI / M4
[CVPR 2025] OmniMMI: A Comprehensive Multi-modal Interaction Benchmark in Streaming Video Contexts
☆13 · Updated 8 months ago
Alternatives and similar repositories for M4
Users interested in M4 are comparing it to the repositories listed below.
- [CVPR 2025] OmniMMI: A Comprehensive Multi-modal Interaction Benchmark in Streaming Video Contexts ☆20 · Updated 8 months ago
- ☆18 · Updated last year
- [ACL 2024] TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated 2 years ago
- [ACL 2023] VSTAR is a multimodal dialogue dataset with scene and topic transition information ☆15 · Updated last year
- Code for "ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding" [ICML 2025] ☆43 · Updated 4 months ago
- Extending context length of visual language models ☆12 · Updated 11 months ago
- Visual and Embodied Concepts evaluation benchmark ☆21 · Updated 2 years ago
- ☆13 · Updated 7 months ago
- [NAACL 2024] Vision language model that reduces hallucinations through self-feedback guided revision. Visualizes attentions on image feat… ☆47 · Updated last year
- A project for tri-modal LLM benchmarking and instruction tuning. ☆52 · Updated 8 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆34 · Updated 3 weeks ago
- Multimodal RewardBench ☆55 · Updated 9 months ago
- On Path to Multimodal Generalist: General-Level and General-Bench ☆19 · Updated 5 months ago
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] ☆20 · Updated 9 months ago
- ☆62 · Updated 3 months ago
- Official repo for EscapeCraft (a 3D environment for room escape) and the benchmark MM-Escape; accepted at ICCV 2025. ☆35 · Updated 5 months ago
- ☆35 · Updated last year
- ✨✨The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆50 · Updated 5 months ago
- GitHub repository for "Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas" (ICML 2025) ☆62 · Updated 7 months ago
- [NeurIPS 2024] Official code for (IMA) Implicit Multimodal Alignment: On the Generalization of Frozen LLMs to Multimodal Inputs ☆22 · Updated last year
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆30 · Updated last year
- Official implementation of "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data" (ICLR 2024) ☆34 · Updated last year
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆48 · Updated last year
- ☆50 · Updated 2 years ago
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆31 · Updated 11 months ago
- ☆21 · Updated last year
- More reliable Video Understanding Evaluation ☆12 · Updated 2 months ago
- ☆23 · Updated 6 months ago
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆77 · Updated 5 months ago
- TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models ☆37 · Updated last year