SenseTime-FVG / InteractiveOmni
☆22 Updated 2 months ago
Alternatives and similar repositories for InteractiveOmni
Users interested in InteractiveOmni are comparing it to the repositories listed below.
- Kling-Foley: Multimodal Diffusion Transformer for High-Quality Video-to-Audio Generation ☆63 Updated 7 months ago
- ☆38 Updated 2 weeks ago
- [ECCV 2024 Oral] Audio-Synchronized Visual Animation ☆57 Updated last year
- [CVPR 2024] Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners ☆155 Updated last year
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆200 Updated last year
- ☆62 Updated 7 months ago
- (NIPS 2025) OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align… ☆123 Updated 2 months ago
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) ☆76 Updated 10 months ago
- ☆114 Updated 7 months ago
- Ming - facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM. ☆575 Updated 3 months ago
- [arXiv 2024] Official code for MMTrail: A Multimodal Trailer Video Dataset with Language and Music Descriptions ☆33 Updated 11 months ago
- DeepDubber-V1: Towards High Quality and Dialogue, Narration, Monologue Adaptive Movie Dubbing Via Multi-Modal Chain-of-Thoughts Reasoning… ☆28 Updated 4 months ago
- A curated list of Vision (video/image) to Audio Generation ☆96 Updated 2 months ago
- [AAAI 2024] V2A-Mapper: A Lightweight Solution for Vision-to-Audio Generation by Connecting Foundation Models ☆27 Updated 2 years ago
- Official code for the CVPR'24 paper Diff-BGM ☆72 Updated last year
- Source code for "Synchformer: Efficient Synchronization from Sparse Cues" (ICASSP 2024) ☆106 Updated 4 months ago
- ☆141 Updated last year
- Towards Fine-grained Audio Captioning with Multimodal Contextual Cues ☆86 Updated last month
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆73 Updated 8 months ago
- ☆185 Updated 11 months ago
- ☆77 Updated 9 months ago
- The official implementation of OmniFlow: Any-to-Any Generation with Multi-Modal Rectified Flows ☆122 Updated 5 months ago
- ☆19 Updated 5 months ago
- Official source code for the paper EmoDubber: Towards High Quality and Emotion Controllable Movie Dubbing. ☆34 Updated 8 months ago
- A text-conditional diffusion probabilistic model capable of generating high-fidelity audio. ☆188 Updated last year
- [AAAI 2025] VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization ☆53 Updated last year
- A fully open-source implementation of a GPT-4o-like speech-to-speech video understanding model. ☆36 Updated 9 months ago
- Official repository of the IJCAI 2024 paper "BATON: Aligning Text-to-Audio Model with Human Preference Feedback" ☆32 Updated 11 months ago
- Make-An-Audio-3: Transforming Text/Video into Audio via Flow-based Large Diffusion Transformers ☆118 Updated 8 months ago
- video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions, which is d… ☆146 Updated last week