zhuyjan / MER2025-MRAC25
[ACM-MM 2025 Workshop] More Is Better: A MoE-Based Emotion Recognition Framework with Human Preference Alignment.
☆25 · Updated last month
Alternatives and similar repositories for MER2025-MRAC25
Users interested in MER2025-MRAC25 are comparing it to the libraries listed below.
- (NIPS 2025) OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align… ☆118 · Updated last month
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆70 · Updated 7 months ago
- Towards Fine-grained Audio Captioning with Multimodal Contextual Cues ☆85 · Updated 3 months ago
- DeepDubber-V1: Towards High Quality and Dialogue, Narration, Monologue Adaptive Movie Dubbing Via Multi-Modal Chain-of-Thoughts Reasoning… ☆27 · Updated 3 months ago
- ☆11 · Updated 4 months ago
- A project for tri-modal LLM benchmarking and instruction tuning. ☆53 · Updated 9 months ago
- ☆76 · Updated 3 months ago
- ☆37 · Updated 4 months ago
- ☆22 · Updated 11 months ago
- A fully open-source implementation of a GPT-4o-like speech-to-speech video understanding model. ☆36 · Updated 8 months ago
- Repository for the ACL 2025 Findings paper "From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities" ☆82 · Updated this week
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) ☆75 · Updated 9 months ago
- Official implementation of MGA-CLAP (ACM MM 2024) ☆25 · Updated last year
- A unified tokenizer that is capable of both extracting semantic information and enabling high-fidelity audio reconstruction. ☆131 · Updated 3 months ago
- ☆12 · Updated 11 months ago
- Official repository of the IJCAI 2024 paper "BATON: Aligning Text-to-Audio Model with Human Preference Feedback" ☆32 · Updated 9 months ago
- ☆19 · Updated last year
- ☆34 · Updated last month
- ☆35 · Updated 7 months ago
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆22 · Updated last year
- [AAAI 2026] DIFFA: Large Language Diffusion Models Can Listen and Understand ☆40 · Updated last month
- Code repo for the ICASSP 2023 paper "MMCosine: Multi-Modal Cosine Loss Towards Balanced Audio-Visual Fine-Grained Learning" ☆24 · Updated 2 years ago
- ☆50 · Updated 3 weeks ago
- A list of current Audio-Vision Multimodal resources (paper, application, data, review, survey, etc.). ☆31 · Updated 2 years ago
- ☆127 · Updated 3 months ago
- Data Pipeline, Models, and Benchmark for Omni-Captioner. ☆109 · Updated 2 months ago
- video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions, which is d… ☆136 · Updated last week
- Repository of the WACV'24 paper "Can CLIP Help Sound Source Localization?" ☆33 · Updated 10 months ago
- The open-source implementation of the cross-attention mechanism from the paper "JOINTLY TRAINING LARGE AUTOREGRESSIVE MULTIMODAL MODELS" ☆36 · Updated last year
- [ACL 2024] PyTorch code for the paper "StyleDubber: Towards Multi-Scale Style Learning for Movie Dubbing" ☆95 · Updated last year