Ceaglex / LoVA
The code and weights for LoVA, a novel model for long-form video-to-audio generation. Built on the Diffusion Transformer (DiT) architecture, LoVA is more effective at generating long-form audio than existing autoregressive models and UNet-based diffusion models.
☆16 · Updated 7 months ago
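For a rough sense of what a DiT-style video-to-audio denoiser looks like, here is a minimal, self-contained PyTorch sketch. It is not LoVA's actual implementation: the module names, feature dimensions, and conditioning scheme (video tokens concatenated with noisy audio-latent tokens into a single attention sequence) are illustrative assumptions.

```python
# Toy sketch of a DiT-style denoiser conditioned on video features.
# All shapes, names, and the conditioning layout are assumptions, not LoVA's code.
import torch
import torch.nn as nn

class ToyVideoToAudioDiT(nn.Module):
    def __init__(self, audio_dim=64, video_dim=512, hidden=256, layers=4):
        super().__init__()
        self.audio_in = nn.Linear(audio_dim, hidden)
        self.video_in = nn.Linear(video_dim, hidden)
        self.time_in = nn.Linear(1, hidden)
        block = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(block, num_layers=layers)
        self.out = nn.Linear(hidden, audio_dim)

    def forward(self, noisy_audio, video_feats, t):
        # Video tokens and noisy audio-latent tokens share one attention sequence,
        # so the receptive field spans the whole (long) clip rather than a fixed
        # UNet window.
        a = self.audio_in(noisy_audio)
        v = self.video_in(video_feats)
        ts = self.time_in(t.view(-1, 1, 1).float()).expand(-1, a.size(1), -1)
        h = self.backbone(torch.cat([v, a + ts], dim=1))
        return self.out(h[:, v.size(1):])   # predicted noise for the audio tokens

model = ToyVideoToAudioDiT()
noise = model(torch.randn(2, 400, 64),    # noisy audio latents: 2 clips x 400 frames
              torch.randn(2, 100, 512),   # video features: 2 clips x 100 frames
              torch.tensor([10, 500]))    # diffusion timesteps
print(noise.shape)                        # torch.Size([2, 400, 64])
```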
Alternatives and similar repositories for LoVA
Users interested in LoVA are comparing it to the libraries listed below
- ☆110 · Updated 3 weeks ago
- A project for tri-modal LLM benchmarking and instruction tuning. ☆48 · Updated 6 months ago
- AIR-Bench: Benchmarking Large Audio-Language Models via Generative Comprehension ☆117 · Updated 9 months ago
- A fully open-source implementation of a GPT-4o-like speech-to-speech video understanding model. ☆26 · Updated 5 months ago
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆56 · Updated 4 months ago
- An easy-to-use, fast, and easily integrable tool for evaluating audio LLM ☆144 · Updated this week
- [NeurIPS 2025] Benchmark data and code for MMAR: A Challenging Benchmark for Deep Reasoning in Speech, Audio, Music, and Their Mix ☆167 · Updated 3 months ago
- LUCY: Linguistic Understanding and Control Yielding Early Stage of Her ☆55 · Updated 5 months ago
- A curated list of Video to Audio Generation ☆72 · Updated 3 months ago
- Make-An-Audio-3: Transforming Text/Video into Audio via Flow-based Large Diffusion Transformers ☆109 · Updated 4 months ago
- ☆41 · Updated last year
- ☆44 · Updated last week
- Towards Fine-grained Audio Captioning with Multimodal Contextual Cues ☆80 · Updated 3 months ago
- BLSP-Emo: Towards Empathetic Large Speech-Language Models ☆49 · Updated last year
- (NeurIPS 2025) OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align… ☆99 · Updated this week
- ☆19 · Updated 3 months ago
- ☆144 · Updated last month
- Code for NeurIPS 2023 paper "DASpeech: Directed Acyclic Transformer for Fast and High-quality Speech-to-Speech Translation". ☆63 · Updated last year
- Official Repository of IJCAI 2024 Paper: "BATON: Aligning Text-to-Audio Model with Human Preference Feedback" ☆29 · Updated 6 months ago
- SpeechAgents: Human-Communication Simulation with Multi-Modal Multi-Agent Systems ☆83 · Updated last year
- 🤗 R1-AQA Model: mispeech/r1-aqa ☆299 · Updated 5 months ago
- VehicleWorld is the first comprehensive multi-device environment for intelligent vehicle interaction that accurately models the complex, … ☆13 · Updated last week
- The dataset and baseline code for Text-to-Audio Grounding (TAG) ☆45 · Updated 2 months ago
- This repository contains metadata of the WavCaps dataset and code for downstream tasks. ☆247 · Updated last year
- The open source code for LLM-Codec