guyyariv / TempoTokens
This repo contains the official PyTorch implementation of "Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model Adaptation".
Related projects:
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models
- Official codes and models of the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generati…"
- This repo contains the official PyTorch implementation of AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image …
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies"
- [CVPR 2024] Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners
- Long-Term Rhythmic Video Soundtracker (ICML 2023)
- Official implementation of the pipeline presented in "I Hear Your True Colors: Image Guided Audio Generation"
- Official code for the CVPR 2024 paper Diff-BGM
- Official code for SEE-2-SOUND: Zero-Shot Spatial Environment-to-Spatial Sound
- Anim-400K: A dataset designed from the ground up for automated dubbing of video
- Make-An-Audio-3: Transforming Text/Video into Audio via Flow-based Large Diffusion Transformers
- A text-conditional diffusion probabilistic model capable of generating high-fidelity audio
- [ECCV 2024 Oral] Audio-Synchronized Visual Animation
- AudioLDM training, finetuning, evaluation, and inference
- CoMoSpeech: One-Step Speech and Singing Voice Synthesis via Consistency Model
- Efficient synchronization from sparse cues
- Refactored and updated version of `stable-audio-tools`, an open-source codebase for audio/music generative models originally by Stabili…
- Unsupervised Rhythm Modeling for Voice Conversion
- Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model
- A latent diffusion model for text-to-music generation
- VoiceLDM: Text-to-Speech with Environmental Context
- PyTorch implementation of Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities
- Metrics for evaluating music and audio generative models, with a focus on long-form, full-band, and stereo generations
- [Interspeech 2024] Whisper-Flamingo: Integrating Visual Features into Whisper for Audio-Visual Speech Recognition and Translation
- [AAAI 2024] V2A-Mapper: A Lightweight Solution for Vision-to-Audio Generation by Connecting Foundation Models