luosiallen / Diff-Foley
Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models
☆200 · Updated last year
Alternatives and similar repositories for Diff-Foley
Users interested in Diff-Foley are comparing it to the repositories listed below.
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies". ☆93 · Updated 2 years ago
- Official codes and models of the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generati…" ☆190 · Updated last year
- Source code for "Synchformer: Efficient Synchronization from Sparse Cues" (ICASSP 2024) ☆101 · Updated 4 months ago
- ☆113 · Updated 7 months ago
- [ECCV 2024 Oral] Audio-Synchronized Visual Animation ☆57 · Updated last year
- ☆61 · Updated 7 months ago
- [CVPR 2024] Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners ☆155 · Updated last year
- Official code for the CVPR'24 paper Diff-BGM ☆72 · Updated last year
- Make-An-Audio-3: Transforming Text/Video into Audio via Flow-based Large Diffusion Transformers ☆117 · Updated 7 months ago
- ☆59 · Updated last year
- A text-conditional diffusion probabilistic model capable of generating high-fidelity audio. ☆188 · Updated last year
- This repo contains the official PyTorch implementation of: Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model Adaptati… ☆129 · Updated 11 months ago
- AudioLDM training, finetuning, evaluation and inference. ☆290 · Updated last year
- The official implementation of V-AURA: Temporally Aligned Audio for Video with Autoregression (ICASSP 2025, Oral) ☆32 · Updated last year
- ☆187 · Updated last month
- Implementation of Frieren: Efficient Video-to-Audio Generation Network with Rectified Flow Matching (NeurIPS'24) ☆59 · Updated 9 months ago
- [ICCV 2023] Video Background Music Generation: Dataset, Method and Evaluation ☆77 · Updated last year
- [AAAI 2024] V2A-Mapper: A Lightweight Solution for Vision-to-Audio Generation by Connecting Foundation Models ☆27 · Updated 2 years ago
- Official implementation of the pipeline presented in "I Hear Your True Colors: Image Guided Audio Generation" ☆124 · Updated 2 years ago
- This repo contains the official PyTorch implementation of AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image … ☆87 · Updated last year
- Official PyTorch implementation of ReWaS (AAAI'25), "Read, Watch and Scream! Sound Generation from Text and Video" ☆43 · Updated last year
- Kling-Foley: Multimodal Diffusion Transformer for High-Quality Video-to-Audio Generation ☆62 · Updated 6 months ago
- To release the source code for reproducing the results reported in our paper: https://arxiv.org/abs/2409.17550 ☆14 · Updated last year
- MU-LLaMA: Music Understanding Large Language Model ☆299 · Updated 4 months ago
- This repository contains metadata of the WavCaps dataset and code for downstream tasks. ☆253 · Updated last year
- ☆47 · Updated 9 months ago
- [ICML 2023] Long-Term Rhythmic Video Soundtracker ☆61 · Updated 5 months ago
- Official source code for the paper "EmoDubber: Towards High Quality and Emotion Controllable Movie Dubbing" ☆33 · Updated 7 months ago
- Action2Sound: Ambient-Aware Generation of Action Sounds from Egocentric Videos ☆25 · Updated last year
- [NeurIPS 2024] Code, dataset, and samples for the VATT paper “Tell What You Hear From What You See - Video to Audio Generation Through Text” ☆34 · Updated 5 months ago