zhuyjan / MER2025-MRAC25
[ACM-MM 2025 Workshop] More Is Better: A MoE-Based Emotion Recognition Framework with Human Preference Alignment.
★25 · Updated 2 months ago
Alternatives and similar repositories for MER2025-MRAC25
Users interested in MER2025-MRAC25 are comparing it to the libraries listed below.
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥 The Exploration of R1 for General Audio-Vi…] (★70, updated 8 months ago)
- DeepDubber-V1: Towards High Quality and Dialogue, Narration, Monologue Adaptive Movie Dubbing Via Multi-Modal Chain-of-Thoughts Reasoning… (★28, updated 4 months ago)
- (NIPS 2025) OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align… (★121, updated 2 months ago)
- Towards Fine-grained Audio Captioning with Multimodal Contextual Cues (★86, updated 3 weeks ago)
- "Omni-R1: Towards the Unified Generative Paradigm for Multimodal Reasoning" (★35, updated this week)
- (★11, updated 5 months ago)
- (★76, updated 4 months ago)
- Official implementation of MGA-CLAP (ACM MM 2024) (★28, updated last year)
- Data Pipeline, Models, and Benchmark for Omni-Captioner. (★115, updated 3 months ago)
- A project for tri-modal LLM benchmarking and instruction tuning. (★54, updated 9 months ago)
- (★36, updated last week)
- (★22, updated last year)
- Official repository of the IJCAI 2024 paper "BATON: Aligning Text-to-Audio Model with Human Preference Feedback" (★32, updated 10 months ago)
- A unified tokenizer capable of both extracting semantic information and enabling high-fidelity audio reconstruction. (★131, updated 4 months ago)
- (★36, updated 7 months ago)
- ACL 2025 Findings paper: "From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities" (★86, updated 3 weeks ago)
- A fully open-source implementation of a GPT-4o-like speech-to-speech video understanding model. (★36, updated 9 months ago)
- (★39, updated 4 months ago)
- [ACL 2024] PyTorch code for the paper "StyleDubber: Towards Multi-Scale Style Learning for Movie Dubbing" (★95, updated last year)
- Official PyTorch implementation of EMOVA, CVPR 2025 (https://arxiv.org/abs/2409.18042) (★76, updated 10 months ago)
- (★13, updated last year)
- video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions, which is d… (★141, updated last month)
- Code repository for the ICASSP 2023 paper "MMCosine: Multi-Modal Cosine Loss Towards Balanced Audio-Visual Fine-Grained Learning" (★25, updated 2 years ago)
- [Official Implementation] Acoustic Autoregressive Modeling 🔥 (★74, updated last year)
- Official code for DeepSound-V1 (★13, updated 8 months ago)
- (★50, updated last month)
- A Foundation Model for Industrial Signal Comprehensive Representation (★57, updated 5 months ago)
- Repository of the WACV'24 paper "Can CLIP Help Sound Source Localization?" (★34, updated 11 months ago)
- [AAAI 2026] DIFFA: Large Language Diffusion Models Can Listen and Understand (★41, updated 2 months ago)
- (★24, updated 4 months ago)