descriptinc / lyrebird-wav2clip
Official implementation of the paper "Wav2CLIP: Learning Robust Audio Representations From CLIP"
☆338 · Updated 3 years ago
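Wav2CLIP is distributed as a pip package; below is a minimal usage sketch for embedding an audio clip into the CLIP space, assuming the `wav2clip` package's `get_model` and `embed_audio` helpers and a placeholder WAV path.

```python
# Minimal sketch: embed a waveform with a pretrained Wav2CLIP audio encoder.
# Assumes `pip install wav2clip librosa`; "example.wav" is a placeholder path.
import librosa
import wav2clip

audio, sr = librosa.load("example.wav", sr=16000)   # mono waveform as a numpy array
model = wav2clip.get_model()                         # load the pretrained audio encoder
embeddings = wav2clip.embed_audio(audio, model)      # CLIP-space embedding(s) for the clip
print(embeddings.shape)
```

Because the embeddings live in the same space as CLIP's image and text encoders, they can be compared directly (e.g. via cosine similarity) to CLIP text or image embeddings for retrieval-style tasks.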
Alternatives and similar repositories for lyrebird-wav2clip:
Users interested in lyrebird-wav2clip are comparing it to the libraries listed below.
- Source code for "Taming Visually Guided Sound Generation" (Oral at BMVC 2021) ☆358 · Updated 8 months ago
- This repository contains metadata of the WavCaps dataset and code for downstream tasks. ☆218 · Updated 8 months ago
- AudioLDM training, finetuning, evaluation and inference. ☆242 · Updated 3 months ago
- Official implementation of the pipeline presented in "I Hear Your True Colors: Image Guided Audio Generation" ☆112 · Updated 2 years ago
- Audio Dataset for training CLAP and other models ☆673 · Updated last year
- Toward Universal Text-to-Music-Retrieval (TTMR) [ICASSP23] ☆113 · Updated last year
- VGGSound: A Large-scale Audio-Visual Dataset ☆309 · Updated 3 years ago
- The latent diffusion model for text-to-music generation. ☆166 · Updated last year
- A novel diffusion-based model for synthesizing long-context, high-fidelity music efficiently. ☆194 · Updated last year
- This toolbox aims to unify audio generation model evaluation for easier comparison. ☆326 · Updated 5 months ago
- The source code of our paper "Diffsound: discrete diffusion model for text-to-sound generation" ☆357 · Updated last year
- Source code for models described in the paper "AudioCLIP: Extending CLIP to Image, Text and Audio" (https://arxiv.org/abs/2106.13043) ☆803 · Updated 3 years ago
- Official code and models of the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generation" ☆180 · Updated last year
- This repo contains the official PyTorch implementation of "AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image Generation" ☆80 · Updated 9 months ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies".☆86Updated last year
- 🔊 Repository for our NAACL-HLT 2019 paper: AudioCaps☆157Updated last month
- Code and Pretrained Models for ICLR 2023 Paper "Contrastive Audio-Visual Masked Autoencoder".☆248Updated last year
- [ICCV 2023] Video Background Music Generation: Dataset, Method and Evaluation☆71Updated 11 months ago
- [ACM MM 2021 Best Paper Award] Video Background Music Generation with Controllable Music Transformer☆303Updated 3 months ago
- Trainer for audio-diffusion-pytorch☆128Updated 2 years ago
- An official reimplementation of the method described in the INTERSPEECH 2021 paper "Speech Resynthesis from Discrete Disentangled Self-Supervised Representations" ☆403 · Updated last year
- A lightweight library for Frechet Audio Distance calculation (see the usage sketch after this list). ☆259 · Updated 6 months ago
- Learning audio concepts from natural language supervision ☆537 · Updated 6 months ago
- Official PyTorch implementation of the TIP paper "Generating Visually Aligned Sound from Videos" and the corresponding Visually Aligned Sound (VAS) dataset ☆52 · Updated 4 years ago
- LP-MusicCaps: LLM-Based Pseudo Music Captioning [ISMIR23] ☆313 · Updated 11 months ago
- BDDM: Bilateral Denoising Diffusion Models for Fast and High-Quality Speech Synthesis ☆227 · Updated 2 years ago
- MU-LLaMA: Music Understanding Large Language Model ☆270 · Updated last year
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆178 · Updated 9 months ago
- Official implementation of VQMIVC: One-shot (any-to-any) Voice Conversion @ Interspeech 2021 + Online playing demo! ☆347 · Updated 2 years ago
- This repo hosts the code and models of "Masked Autoencoders that Listen". ☆574 · Updated 11 months ago
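Several of the listed projects are evaluation tools rather than models. As a companion to the Frechet Audio Distance library above, here is a minimal sketch of scoring a set of generated clips against a reference set; it assumes the `frechet_audio_distance` pip package with its `FrechetAudioDistance` class and VGGish backend, and the directory paths are placeholders.

```python
# Minimal sketch: compute FAD between a reference set and a generated set.
# Assumes `pip install frechet_audio_distance`; paths below are placeholders.
from frechet_audio_distance import FrechetAudioDistance

fad = FrechetAudioDistance(
    model_name="vggish",    # embedding backend used to compare the two sets
    sample_rate=16000,
    use_pca=False,
    use_activation=False,
    verbose=False,
)

# Lower is better: a small FAD means the embedding statistics of the generated
# audio are close to those of the reference (background) set.
score = fad.score("path/to/reference_audio", "path/to/generated_audio")
print(f"FAD: {score:.3f}")
```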