InternLM / StarBench
☆38 · Updated 2 weeks ago
Alternatives and similar repositories for StarBench
Users interested in StarBench are comparing it to the repositories listed below.
- Kling-Foley: Multimodal Diffusion Transformer for High-Quality Video-to-Audio Generation ☆63 · Updated 7 months ago
- DeepDubber-V1: Towards High Quality and Dialogue, Narration, Monologue Adaptive Movie Dubbing Via Multi-Modal Chain-of-Thoughts Reasoning… ☆28 · Updated 4 months ago
- [ECCV 2024 Oral] Audio-Synchronized Visual Animation ☆57 · Updated last year
- [CVPR 2024] Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners ☆155 · Updated last year
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆200 · Updated last year
- [ICLR 2026] Data Pipeline, Models, and Benchmark for Omni-Captioner ☆116 · Updated 3 months ago
- A curated list of Vision (video/image) to Audio Generation ☆96 · Updated 2 months ago
- Towards Fine-grained Audio Captioning with Multimodal Contextual Cues ☆86 · Updated last month
- Source code for "Synchformer: Efficient Synchronization from Sparse Cues" (ICASSP 2024) ☆106 · Updated 4 months ago
- The official implementation of V-AURA: Temporally Aligned Audio for Video with Autoregression (ICASSP 2025, Oral) ☆32 · Updated last year
- Official source code for the paper: EmoDubber: Towards High Quality and Emotion Controllable Movie Dubbing ☆34 · Updated 8 months ago
- video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions, which is d… ☆146 · Updated last week
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) ☆76 · Updated 10 months ago
- [NeurIPS 2024] Code, Dataset, Samples for the VATT paper “Tell What You Hear From What You See - Video to Audio Generation Through Text” ☆35 · Updated 6 months ago
- Official code for the CVPR'24 paper Diff-BGM ☆72 · Updated last year
- Official Repository of IJCAI 2024 Paper: "BATON: Aligning Text-to-Audio Model with Human Preference Feedback" ☆32 · Updated 11 months ago
- A fully open-source implementation of a GPT-4o-like speech-to-speech video understanding model ☆36 · Updated 10 months ago
- Ego4DSounds: A diverse egocentric dataset with high action-audio correspondence ☆19 · Updated last year
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆73 · Updated 8 months ago
- ☆62 · Updated 7 months ago
- [ISMIR 2025] A curated list of vision-to-music generation: methods, datasets, evaluation and challenges ☆118 · Updated 5 months ago
- [ICCV 2025] TokenBridge: Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation. https://yuqingwang1029.github.io/To… ☆151 · Updated 6 months ago
- [arXiv 2024] Official code for MMTrail: A Multimodal Trailer Video Dataset with Language and Music Descriptions ☆33 · Updated last year
- [AAAI 2024] V2A-Mapper: A Lightweight Solution for Vision-to-Audio Generation by Connecting Foundation Models ☆27 · Updated 2 years ago
- "Omni-R1: Towards the Unified Generative Paradigm for Multimodal Reasoning" ☆45 · Updated last week
- [NeurIPS 2025] OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align… ☆123 · Updated 2 months ago
- [Official Implementation] Acoustic Autoregressive Modeling 🔥 ☆74 · Updated last year
- Official PyTorch implementation of ReWaS (AAAI'25) "Read, Watch and Scream! Sound Generation from Text and Video" ☆43 · Updated last year
- Make-An-Audio-3: Transforming Text/Video into Audio via Flow-based Large Diffusion Transformers ☆118 · Updated 8 months ago
- ☆40 · Updated 10 months ago