descriptinc / lyrebird-wav2clip
Official implementation of the paper "Wav2CLIP: Learning Robust Audio Representations from CLIP"
☆347 · Updated 3 years ago
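As a point of reference for the comparisons below, the Wav2CLIP repository distributes a pip-installable package for extracting CLIP-aligned audio embeddings. The following is a minimal usage sketch, assuming the `wav2clip` package exposes `get_model()` and `embed_audio()` as documented in its README and that a 16 kHz mono waveform is an acceptable input; verify both against the repository before relying on them.

```python
# Minimal sketch: extracting Wav2CLIP audio embeddings.
# Assumes the PyPI `wav2clip` package exposes get_model() and embed_audio()
# as shown in the upstream README; the 16 kHz mono input is an assumption.
import librosa
import wav2clip

# Load a mono waveform (resampled to 16 kHz here; check the repository
# for the canonical sample rate).
audio, sr = librosa.load("example.wav", sr=16000, mono=True)

model = wav2clip.get_model()                      # pretrained audio encoder
embeddings = wav2clip.embed_audio(audio, model)   # CLIP-aligned embedding(s)
print(embeddings.shape)
```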
Alternatives and similar repositories for lyrebird-wav2clip
Users interested in lyrebird-wav2clip are comparing it to the libraries listed below.
- Source code for "Taming Visually Guided Sound Generation" (Oral at the BMVC 2021)☆363Updated last year
- Official implementation of the pipeline presented in I hear your true colors: Image Guided Audio Generation☆116Updated 2 years ago
- This repository contains metadata for the WavCaps dataset and code for downstream tasks. ☆237 · Updated 11 months ago
- The source code of our paper "Diffsound: Discrete Diffusion Model for Text-to-Sound Generation" ☆357 · Updated last year
- Official PyTorch implementation of the TIP paper "Generating Visually Aligned Sound from Videos" and the corresponding Visually Aligned S… ☆53 · Updated 4 years ago
- Source code for models described in the paper "AudioCLIP: Extending CLIP to Image, Text and Audio" (https://arxiv.org/abs/2106.13043) ☆823 · Updated 3 years ago
- Code and pretrained models for the ICLR 2023 paper "Contrastive Audio-Visual Masked Autoencoder" ☆264 · Updated last year
- VGGSound: A Large-scale Audio-Visual Dataset ☆322 · Updated 3 years ago
- 🔊 Repository for our NAACL-HLT 2019 paper: AudioCaps ☆176 · Updated 4 months ago
- This repo contains the official PyTorch implementation of "AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image …" ☆84 · Updated last year
- Official code and models of the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generati…" ☆184 · Updated last year
- A novel diffusion-based model for synthesizing long-context, high-fidelity music efficiently. ☆196 · Updated 2 years ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies" ☆87 · Updated last year
- [ACM MM 2021 Best Paper Award] Video Background Music Generation with Controllable Music Transformer ☆315 · Updated last month
- This repo hosts the code and model of MAViL. ☆44 · Updated last year
- AudioLDM training, finetuning, evaluation and inference. ☆261 · Updated 7 months ago
- The latent diffusion model for text-to-music generation. ☆173 · Updated last year
- This toolbox aims to unify audio generation model evaluation for easier comparison. ☆347 · Updated 9 months ago
- Toward Universal Text-to-Music Retrieval (TTMR) [ICASSP 2023] ☆113 · Updated last year
- Source code for "Sparse in Space and Time: Audio-visual Synchronisation with Trainable Selectors" (Spotlight at BMVC 2022) ☆51 · Updated last year
- [ICCV 2023] Video Background Music Generation: Dataset, Method and Evaluation ☆75 · Updated last year
- Audio dataset for training CLAP and other models ☆689 · Updated last year
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆190 · Updated last year
- Source code for "Synchformer: Efficient Synchronization from Sparse Cues" (ICASSP 2024) ☆70 · Updated 5 months ago
- This repo hosts the code and models of "Masked Autoencoders that Listen". ☆601 · Updated last year
- SpeechCLIP: Integrating Speech with Pre-Trained Vision and Language Model, accepted to IEEE SLT 2022 ☆115 · Updated 2 years ago
- Splits for the EPIC-Sounds dataset ☆76 · Updated 7 months ago
- An audio language model for audio tasks ☆309 · Updated last year
- Learning Long-Term Spatial-Temporal Graphs for Active Speaker Detection (ECCV 2022) ☆65 · Updated last year
- Learning the Beauty in Songs: Neural Singing Voice Beautifier; ACL 2022 (main conference); official code ☆437 · Updated last year