yzxing87 / Seeing-and-Hearing
[CVPR 2024] Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners
☆155 · Updated last year
Alternatives and similar repositories for Seeing-and-Hearing
Users interested in Seeing-and-Hearing are comparing it to the repositories listed below.
- [ECCV 2024 Oral] Audio-Synchronized Visual Animation ☆57 · Updated last year
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆200 · Updated last year
- ☆42 · Updated last year
- [arXiv 2024] Official code for MMTrail: A Multimodal Trailer Video Dataset with Language and Music Descriptions ☆33 · Updated 11 months ago
- Kling-Foley: Multimodal Diffusion Transformer for High-Quality Video-to-Audio Generation ☆62 · Updated 6 months ago
- ☆59 · Updated last year
- Official repository for The Power of Sound (TPoS): Audio Reactive Video Generation with Stable Diffusion (ICCV 2023) ☆25 · Updated 2 years ago
- ☆112 · Updated 7 months ago
- ☆15 · Updated last month
- ☆40 · Updated 8 months ago
- [AAAI 2025] VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization ☆53 · Updated last year
- ☆61 · Updated 6 months ago
- Official PyTorch implementation of: Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model Adaptati… ☆128 · Updated 10 months ago
- A toolkit for computing Fréchet Inception Distance (FID) & Fréchet Video Distance (FVD) metrics ☆41 · Updated 7 months ago
- ☆34 · Updated 2 months ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies" ☆93 · Updated 2 years ago
- Official implementation of the CVPR 2024 paper "EmoGen: Emotional Image Content Generation with Text-to-Image Diffusion Models" ☆91 · Updated 2 months ago
- [CVPR'23] MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation ☆450 · Updated last year
- ☆10 · Updated last month
- [TMLR 2024] ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation ☆256 · Updated last year
- [NeurIPS 2024] CV-VAE: A Compatible Video VAE for Latent Generative Video Models ☆286 · Updated last year
- TalkVid: A Large-Scale Diversified Dataset for Audio-Driven Talking Head Synthesis ☆134 · Updated last month
- ☆141 · Updated last year
- Official implementation of OmniFlow: Any-to-Any Generation with Multi-Modal Rectified Flows ☆122 · Updated 4 months ago
- Official code for the CVPR 2024 paper Diff-BGM ☆72 · Updated last year
- Official PyTorch implementation of AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image … ☆87 · Updated last year
- PyTorch implementation of InstructAny2Pix: Flexible Visual Editing via Multimodal Instruction Following ☆31 · Updated 11 months ago
- Official source code for EmoDubber: Towards High Quality and Emotion Controllable Movie Dubbing ☆32 · Updated 7 months ago
- Official UniVerse-1 code ☆116 · Updated 2 months ago
- ☆187 · Updated last year