yannqi / Draw-an-Audio-Code
Official code of the paper "Draw an Audio: Leveraging Multi-Instruction for Video-to-Audio Synthesis".
☆45 · Updated 7 months ago
Alternatives and similar repositories for Draw-an-Audio-Code:
Users interested in Draw-an-Audio-Code are comparing it to the repositories listed below.
- ☆55 · Updated 9 months ago
- Music production for silent film clips. ☆21 · Updated this week
- ☆20 · Updated last year
- ☆65 · Updated last month
- A text-conditional diffusion probabilistic model capable of generating high-fidelity audio. ☆162 · Updated 11 months ago
- ☆221 · Updated last month
- Anim-400K: A dataset designed from the ground up for automated dubbing of video. ☆104 · Updated 10 months ago
- [AAAI 2025] VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization ☆48 · Updated 4 months ago
- An open source community implementation of the model from the paper "Movie Gen: A Cast of Media Foundation Models". Join our community … ☆60 · Updated last week
- ☆11 · Updated last month
- ☆75 · Updated last year
- An official implementation of SwapAnyone. ☆59 · Updated last month
- Official codes and models of the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generati… ☆182 · Updated last year
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆184 · Updated 11 months ago
- ☆79 · Updated 2 months ago
- Make-An-Audio-3: Transforming Text/Video into Audio via Flow-based Large Diffusion Transformers ☆95 · Updated 6 months ago
- The official implementation of OmniFlow: Any-to-Any Generation with Multi-Modal Rectified Flows ☆57 · Updated last month
- This repository contains the code and data for the paper EmoKnob: Enhance Voice Cloning with Fine-Grained Emotion Control by Haozhe Chen,… ☆70 · Updated 7 months ago
- ☆46 · Updated 5 months ago
- [ECCV 2024 Oral] Audio-Synchronized Visual Animation ☆48 · Updated 7 months ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies". ☆86 · Updated last year
- ☆20 · Updated 2 months ago
- ☆165 · Updated 4 months ago
- Action2Sound: Ambient-Aware Generation of Action Sounds from Egocentric Videos ☆21 · Updated 7 months ago
- Paper: "From Text to Pose to Image: Improving Diffusion Model Control and Quality" ☆46 · Updated 5 months ago
- ☆63 · Updated last year
- FantasyID: Face Knowledge Enhanced ID-Preserving Video Generation ☆56 · Updated last week
- Blending Custom Photos with Video Diffusion Transformers ☆46 · Updated 3 months ago
- Official implementation of "JavisDiT: Joint Audio-Video Diffusion Transformer with Hierarchical Spatio-Temporal Prior Synchronization" ☆49 · Updated 3 weeks ago
- Public code release for the paper "ProCreate, Don't Reproduce! Propulsive Energy Diffusion for Creative Generation" ☆37 · Updated 5 months ago