yannqi / Draw-an-Audio-Code
Official code of the paper: Draw an Audio: Leveraging Multi-Instruction for Video-to-Audio Synthesis.
☆46 · Updated 10 months ago
Alternatives and similar repositories for Draw-an-Audio-Code
Users interested in Draw-an-Audio-Code are comparing it to the repositories listed below.
- Music production for silent film clips. ☆26 · Updated 2 months ago
- [ICML 2025] SongGen: A Single Stage Auto-regressive Transformer for Text-to-Song Generation ☆245 · Updated last week
- An official implementation of SwapAnyone. ☆63 · Updated 4 months ago
- A text-conditional diffusion probabilistic model capable of generating high-fidelity audio. ☆167 · Updated last year
- Anim-400K: A dataset designed from the ground up for automated dubbing of video ☆108 · Updated last year
- Towards Fine-grained Audio Captioning with Multimodal Contextual Cues ☆75 · Updated last month
- Official repo for MoCha: Towards Movie-Grade Talking Character Synthesis ☆38 · Updated last month
- MTVCraft: An Open Veo3-style Audio-Video Generation Demo ☆37 · Updated last week
- Official code and models of the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generation" ☆184 · Updated last year
- An open-source community implementation of the model from the paper "Movie Gen: A Cast of Media Foundation Models". Join our community … ☆60 · Updated last week
- Awesome music generation model: MG² ☆159 · Updated 3 months ago
- Official implementation of the paper "Compressed Image Generation with Denoising Diffusion Codebook Models" ☆56 · Updated 5 months ago
- Kling-Foley: Multimodal Diffusion Transformer for High-Quality Video-to-Audio Generation ☆52 · Updated 2 weeks ago
- Official implementation of MagicFace: Training-free Universal-Style Human Image Customized Synthesis. ☆63 · Updated 6 months ago
- This repository contains the code and data for the paper EmoKnob: Enhance Voice Cloning with Fine-Grained Emotion Control by Haozhe Chen,… ☆74 · Updated 9 months ago
- LLIA - Enabling Low-Latency Interactive Avatars: Real-Time Audio-Driven Portrait Video Generation with Diffusion Models ☆79 · Updated last month
- Omegance: A Single Parameter for Various Granularities in Diffusion-Based Synthesis (ICCV 2025) ☆52 · Updated 2 weeks ago
- [AAAI 2025] VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization ☆50 · Updated 7 months ago
- [ACM MM 2024] Official implementation of "ZePo: Zero-Shot Portrait Stylization with Faster Sampling" ☆41 · Updated 10 months ago
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆190 · Updated last year