descriptinc / lyrebird-wav2clip
Official implementation of the paper "Wav2CLIP: Learning Robust Audio Representations from CLIP"
☆346 · Updated 3 years ago
Alternatives and similar repositories for lyrebird-wav2clip
Users interested in lyrebird-wav2clip are comparing it to the repositories listed below.
- Source code for "Taming Visually Guided Sound Generation" (oral at BMVC 2021) ☆362 · Updated 11 months ago
- This repository contains metadata for the WavCaps dataset and code for downstream tasks. ☆232 · Updated 10 months ago
- Official implementation of the pipeline presented in "I Hear Your True Colors: Image Guided Audio Generation" ☆115 · Updated 2 years ago
- AudioLDM training, finetuning, evaluation and inference. ☆253 · Updated 6 months ago
- Source code for models described in the paper "AudioCLIP: Extending CLIP to Image, Text and Audio" (https://arxiv.org/abs/2106.13043) ☆820 · Updated 3 years ago
- The latent diffusion model for text-to-music generation. ☆173 · Updated last year
- 🔊 Repository for our NAACL-HLT 2019 paper: AudioCaps ☆175 · Updated 3 months ago
- LP-MusicCaps: LLM-Based Pseudo Music Captioning [ISMIR23] ☆330 · Updated last year
- Toward Universal Text-to-Music Retrieval (TTMR) [ICASSP23] ☆113 · Updated last year
- Official code and models of the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generation" ☆184 · Updated last year
- Code and pretrained models for the ICLR 2023 paper "Contrastive Audio-Visual Masked Autoencoder" ☆260 · Updated last year
- A novel diffusion-based model for synthesizing long-context, high-fidelity music efficiently. ☆196 · Updated 2 years ago
- This toolbox aims to unify audio generation model evaluation for easier comparison. ☆343 · Updated 8 months ago
- [ACM MM 2021 Best Paper Award] Video Background Music Generation with Controllable Music Transformer ☆313 · Updated 2 weeks ago
- [ICCV 2023] Video Background Music Generation: Dataset, Method and Evaluation ☆74 · Updated last year
- Official PyTorch implementation of the TIP paper "Generating Visually Aligned Sound from Videos" and the corresponding Visually Aligned Sound (VAS) dataset ☆54 · Updated 4 years ago
- This repo contains the official PyTorch implementation of "AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image Generation" ☆84 · Updated last year
- Audio dataset for training CLAP and other models ☆686 · Updated last year
- An official reimplementation of the method described in the INTERSPEECH 2021 paper "Speech Resynthesis from Discrete Disentangled Self-Supervised Representations" ☆406 · Updated last year
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies".☆86Updated last year
- Learning audio concepts from natural language supervision☆564Updated 9 months ago
- The source code of our paper "Diffsound: discrete diffusion model for text-to-sound generation"☆358Updated last year
- This repo hosts the code and models of "Masked Autoencoders that Listen".☆593Updated last year
- Code for the paper "LLark: A Multimodal Instruction-Following Language Model for Music" by Josh Gardner, Simon Durand, Daniel Stoller, an…☆352Updated last year
- MU-LLaMA: Music Understanding Large Language Model ☆277 · Updated last year
- Official implementation of the paper "Acoustic Music Understanding Model with Large-Scale Self-supervised Training" ☆378 · Updated 3 weeks ago
- VGGSound: A Large-scale Audio-Visual Dataset ☆321 · Updated 3 years ago
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆189 · Updated last year
- Official implementation of SawSing (ISMIR'22) ☆264 · Updated 2 years ago
- ☆162 · Updated last year