multimodal-art-projection / OmniBench
A project for tri-modal (audio, image, text) LLM benchmarking and instruction tuning.
☆53 · Updated 9 months ago
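Since OmniBench evaluates models over audio, image, and text jointly, a typical first step is loading the benchmark data. Below is a minimal sketch in Python, assuming the benchmark is published on the Hugging Face Hub as `m-a-p/OmniBench` with the split and field names guessed in the comments (none of these are confirmed by this page; check the repository README for the real schema):

```python
# Minimal sketch of iterating over OmniBench-style tri-modal items.
# Assumptions (not confirmed here): the benchmark lives on the Hugging Face
# Hub as "m-a-p/OmniBench" with a "train" split; the field names below are
# hypothetical placeholders.
from datasets import load_dataset

ds = load_dataset("m-a-p/OmniBench", split="train")

for example in ds.select(range(3)):
    # Each item is expected to pair an audio clip and an image with a text
    # question and multiple-choice options.
    print(example.get("question"), example.get("options"), example.get("answer"))
```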
Alternatives and similar repositories for OmniBench
Users interested in OmniBench are comparing it to the repositories listed below.
- ☆76 · Updated 3 months ago
- ☆19 · Updated last year
- (NeurIPS 2025) OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align… ☆117 · Updated last month
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆70 · Updated 7 months ago
- Official PyTorch implementation of EMOVA (CVPR 2025, https://arxiv.org/abs/2409.18042) ☆75 · Updated 9 months ago
- [CVPR 2025] OmniMMI: A Comprehensive Multi-modal Interaction Benchmark in Streaming Video Contexts ☆20 · Updated last week
- Towards Fine-grained Audio Captioning with Multimodal Contextual Cues ☆85 · Updated 3 months ago
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆22 · Updated last year
- LMM that addresses catastrophic forgetting (AAAI 2025) ☆44 · Updated 8 months ago
- The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search" [EMNLP 2025] ☆35 · Updated 3 months ago
- ☆37 · Updated 4 months ago
- PyTorch implementation of the model from "Mirasol3B: A Multimodal Autoregressive Model for Time-Aligned and Contextual Modalities" ☆26 · Updated 11 months ago
- Evaluate your agent memory on real-world dialogues, not LLM-simulated dialogues. ☆35 · Updated 5 months ago
- The code and weights for LoVA, a novel model for long-form video-to-audio generation based on the Diffusion Transformer (DiT) arc… ☆15 · Updated 10 months ago
- Demo page of TAVGBench: Benchmarking Text to Audible-Video Generation ☆14 · Updated 8 months ago
- A unified framework for controllable caption generation across images, videos, and audio. Supports multi-modal inputs and customizable ca… ☆52 · Updated 5 months ago
- video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions, which is d… ☆132 · Updated last week
- A fully open-source implementation of a GPT-4o-like speech-to-speech video understanding model. ☆35 · Updated 8 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆34 · Updated last month
- [ACL 2024] A Multimodal, Multigenre, and Multipurpose Audio-Visual Academic Lecture Dataset ☆19 · Updated 7 months ago
- [ACM-MM 2025 Workshop] More Is Better: A MoE-Based Emotion Recognition Framework with Human Preference Alignment. ☆25 · Updated last month
- MIO: A Foundation Model on Multimodal Tokens ☆32 · Updated last year
- ☆78 · Updated 7 months ago
- SpeechAgents: Human-Communication Simulation with Multi-Modal Multi-Agent Systems ☆84 · Updated last year
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆31 · Updated last year
- This is the official repository of Daily-Omni: Towards Audio-Visual Reasoning with Temporal Alignment across Modalities ☆28 · Updated 5 months ago
- ☆11 · Updated 4 months ago
- ☆183 · Updated 10 months ago
- ☆127 · Updated 3 months ago
- [ACL 2023] VSTAR is a multimodal dialogue dataset with scene and topic transition information ☆15 · Updated last year