RainBowLuoCS / OpenOmni
(NeurIPS 2025) OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Alignment and Real-Time Self-Aware Emotional Speech Synthesis
☆121 · Updated 2 months ago
Alternatives and similar repositories for OpenOmni
Users interested in OpenOmni are comparing it to the libraries listed below.
- Official PyTorch implementation of EMOVA (CVPR 2025, https://arxiv.org/abs/2409.18042) ☆76 · Updated 10 months ago
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆70 · Updated 8 months ago
- ACL 2025 Findings paper: From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities ☆86 · Updated 3 weeks ago
- ☆185 · Updated 11 months ago
- DeepDubber-V1: Towards High Quality and Dialogue, Narration, Monologue Adaptive Movie Dubbing Via Multi-Modal Chain-of-Thoughts Reasoning… ☆28 · Updated 4 months ago
- ☆22 · Updated last year
- A project for tri-modal LLM benchmarking and instruction tuning. ☆54 · Updated 9 months ago
- A fully open-source implementation of a GPT-4o-like speech-to-speech video understanding model. ☆36 · Updated 9 months ago
- ☆39 · Updated 4 months ago
- video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions, which is d… ☆141 · Updated last month
- "Omni-R1: Towards the Unified Generative Paradigm for Multimodal Reasoning" ☆35 · Updated this week
- [ACM-MM 2025 Workshop] More Is Better: A MoE-Based Emotion Recognition Framework with Human Preference Alignment. ☆25 · Updated 2 months ago
- Towards Fine-grained Audio Captioning with Multimodal Contextual Cues ☆86 · Updated 3 weeks ago
- ☆76 · Updated 4 months ago
- OpenS2S: Advancing Fully Open-Source End-to-End Empathetic Large Speech Language Model ☆103 · Updated 6 months ago
- ☆36 · Updated last week
- Reproduction of the llama-omni training code ☆73 · Updated last year
- HumanOmni ☆213 · Updated 10 months ago
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆272 · Updated 11 months ago
- Synth-Empathy: Towards High-Quality Synthetic Empathy Data ☆18 · Updated 10 months ago
- [CVPR 2025] OmniMMI: A Comprehensive Multi-modal Interaction Benchmark in Streaming Video Contexts ☆21 · Updated last month
- Ming - facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM. ☆572 · Updated 2 months ago
- The first comprehensive multimodal language analysis benchmark for evaluating foundation models ☆28 · Updated 4 months ago
- ☆77 · Updated 8 months ago