JaaackHongggg / WorldSense
WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs
☆31 · Updated last month
Alternatives and similar repositories for WorldSense
Users interested in WorldSense are comparing it to the repositories listed below.
- Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing" ☆57 · Updated 4 months ago
- ✨✨ The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆49 · Updated 3 months ago
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆30 · Updated 10 months ago
- Official implementation of MIA-DPO ☆66 · Updated 9 months ago
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆74 · Updated 3 months ago
- Official PyTorch Code of ReKV (ICLR'25) ☆62 · Updated 7 months ago
- (ICCV 2025) Official repository of paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆40 · Updated 3 months ago
- [ICCV 2025] ONLY: One-Layer Intervention Sufficiently Mitigates Hallucinations in Large Vision-Language Models ☆40 · Updated 3 months ago
- ☆14 · Updated this week
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph ☆28 · Updated 2 months ago
- ☆33 · Updated 11 months ago
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆100 · Updated 3 months ago
- [NeurIPS'25] ReAgent-V: A Reward-Driven Multi-Agent Framework for Video Understanding ☆37 · Updated last month
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆41 · Updated 6 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆125 · Updated 3 months ago
- Official implementation of paper VideoLLM Knows When to Speak: Enhancing Time-Sensitive Video Comprehension with Video-Text Duet Interact… ☆36 · Updated 8 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated 4 months ago
- This repository continuously updates the latest papers, technical reports, and benchmarks on multimodal reasoning! ☆54 · Updated 7 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆55 · Updated 4 months ago
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆95 · Updated last month
- 🚀 Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ☆33 · Updated 3 months ago
- ☆60 · Updated last month
- This repository is the official implementation of "Look-Back: Implicit Visual Re-focusing in MLLM Reasoning" ☆66 · Updated 3 months ago
- Repo for paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆48 · Updated last month
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆77 · Updated last month
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆72 · Updated 5 months ago
- 🔥🔥🔥 Latest Papers, Codes and Datasets on Video-LMM Post-Training ☆142 · Updated this week
- [ACM MM 2025] TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆88 · Updated last month
- Code for "CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning" ☆25 · Updated 7 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆49 · Updated 7 months ago