descriptinc / lyrebird-wav2clip
Official implementation of the paper WAV2CLIP: LEARNING ROBUST AUDIO REPRESENTATIONS FROM CLIP
☆355 · Updated 3 years ago
Alternatives and similar repositories for lyrebird-wav2clip
Users interested in lyrebird-wav2clip are comparing it to the libraries listed below.
- Source code for "Taming Visually Guided Sound Generation" (Oral at the BMVC 2021) ☆366 · Updated last year
- Official implementation of the pipeline presented in "I Hear Your True Colors: Image Guided Audio Generation" ☆121 · Updated 2 years ago
- This repository contains metadata for the WavCaps dataset and code for downstream tasks. ☆247 · Updated last year
- Source code for models described in the paper "AudioCLIP: Extending CLIP to Image, Text and Audio" (https://arxiv.org/abs/2106.13043) ☆845 · Updated 4 years ago
- Official PyTorch implementation of the TIP paper "Generating Visually Aligned Sound from Videos" and the corresponding Visually Aligned S… ☆53 · Updated 4 years ago
- The source code of our paper "Diffsound: Discrete Diffusion Model for Text-to-Sound Generation" ☆362 · Updated 2 years ago
- 🔊 Repository for our NAACL-HLT 2019 paper: AudioCaps ☆188 · Updated 7 months ago
- Code and pretrained models for the ICLR 2023 paper "Contrastive Audio-Visual Masked Autoencoder" ☆270 · Updated last year
- Toward Universal Text-to-Music Retrieval (TTMR) [ICASSP23] ☆114 · Updated 2 years ago
- AudioLDM training, finetuning, evaluation and inference. ☆275 · Updated 9 months ago
- VGGSound: A Large-scale Audio-Visual Dataset ☆334 · Updated 4 years ago
- This repo contains the official PyTorch implementation of AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image … ☆86 · Updated last year
- This repo hosts the code and model of MAViL. ☆44 · Updated 2 years ago
- An audio language model for audio tasks ☆317 · Updated last year
- This toolbox aims to unify audio generation model evaluation for easier comparison. ☆359 · Updated last year
- The latent diffusion model for text-to-music generation. ☆176 · Updated last year
- LP-MusicCaps: LLM-Based Pseudo Music Captioning [ISMIR23] ☆339 · Updated last year
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆196 · Updated last year
- [ACM MM 2021 Best Paper Award] Video Background Music Generation with Controllable Music Transformer ☆317 · Updated 3 months ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies" ☆90 · Updated last year
- A novel diffusion-based model for synthesizing long-context, high-fidelity music efficiently. ☆195 · Updated 2 years ago
- Official code and models for the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generati… ☆188 · Updated last year
- This repo hosts the code and models of "Masked Autoencoders that Listen". ☆616 · Updated last year
- MU-LLaMA: Music Understanding Large Language Model ☆288 · Updated last month
- Source code for "Synchformer: Efficient Synchronization from Sparse Cues" (ICASSP 2024) ☆90 · Updated 3 weeks ago
- [ICCV 2023] Video Background Music Generation: Dataset, Method and Evaluation ☆76 · Updated last year
- Source code for "Sparse in Space and Time: Audio-visual Synchronisation with Trainable Selectors" (Spotlight at the BMVC 2022) ☆53 · Updated last year
- Code for the paper "LLark: A Multimodal Instruction-Following Language Model for Music" by Josh Gardner, Simon Durand, Daniel Stoller, an… ☆362 · Updated last year
- SpeechCLIP: Integrating Speech with Pre-Trained Vision and Language Model, accepted to IEEE SLT 2022 ☆116 · Updated 2 years ago
- This repo contains the official PyTorch implementation of "Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model Adaptati…" ☆128 · Updated 7 months ago