JaaackHongggg / WorldSense
WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs
☆25 · Updated 2 months ago
Alternatives and similar repositories for WorldSense
Users interested in WorldSense are comparing it to the repositories listed below.
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆26 · Updated 6 months ago
- This is the official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 6 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆46 · Updated 5 months ago
- [ACL 2024 Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆65 · Updated 9 months ago
- ☆32 · Updated 3 weeks ago
- Official implementation of MIA-DPO ☆58 · Updated 5 months ago
- 🚀 Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆23 · Updated 2 weeks ago
- On Path to Multimodal Generalist: General-Level and General-Bench ☆14 · Updated last month
- ☆80 · Updated 5 months ago
- ☆49 · Updated 2 months ago
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆18 · Updated 8 months ago
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆37 · Updated 5 months ago
- ☆37 · Updated 11 months ago
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆65 · Updated 2 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆117 · Updated 2 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆61 · Updated 2 weeks ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆34 · Updated 3 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆73 · Updated 2 months ago
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation ☆85 · Updated 3 weeks ago
- ☆30 · Updated 10 months ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆40 · Updated 2 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆66 · Updated last month
- Repo for the paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆49 · Updated 3 months ago
- ☆28 · Updated 3 weeks ago
- TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆51 · Updated last week
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆22 · Updated 10 months ago
- Official PyTorch code of ReKV (ICLR'25) ☆28 · Updated 3 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆47 · Updated 3 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆60 · Updated last year
- Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better ☆29 · Updated last week