descriptinc / lyrebird-wav2clip
Official implementation of the paper "Wav2CLIP: Learning Robust Audio Representations from CLIP"
☆356 · Updated 3 years ago
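Wav2CLIP distills audio embeddings into CLIP's shared image-text embedding space, so audio can be matched against images or text by cosine similarity. The sketch below is purely illustrative (it does not use the Wav2CLIP API): it uses synthetic unit vectors as stand-ins for CLIP-style embeddings to show how retrieval in a shared space works.

```python
import numpy as np

# Illustrative sketch, not the Wav2CLIP API: nearest-neighbor retrieval
# in a shared embedding space, the setting Wav2CLIP targets.
# All vectors here are synthetic stand-ins for real CLIP embeddings.
rng = np.random.default_rng(0)

def normalize(x):
    """L2-normalize along the last axis so dot product = cosine similarity."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Pretend gallery of 5 image embeddings (512-dim, like CLIP ViT-B/32).
image_embeddings = normalize(rng.normal(size=(5, 512)))

# Pretend audio embedding that lies close to image 2 in the shared space.
audio_embedding = normalize(image_embeddings[2] + 0.1 * rng.normal(size=512))

# On unit vectors, cosine similarity reduces to a matrix-vector product.
scores = image_embeddings @ audio_embedding
best = int(np.argmax(scores))
print(best)  # retrieves index 2, the image the audio was built near
```

In practice the gallery embeddings would come from CLIP's image encoder and the query from Wav2CLIP's audio encoder; only the similarity-and-argmax step above carries over unchanged.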
Alternatives and similar repositories for lyrebird-wav2clip
Users interested in lyrebird-wav2clip are comparing it to the libraries listed below.
- Source code for "Taming Visually Guided Sound Generation" (Oral at BMVC 2021) ☆366 · Updated last year
- Official implementation of the pipeline presented in "I Hear Your True Colors: Image Guided Audio Generation" ☆119 · Updated 2 years ago
- The source code of our paper "Diffsound: Discrete Diffusion Model for Text-to-Sound Generation" ☆361 · Updated 2 years ago
- This repository contains metadata for the WavCaps dataset and code for downstream tasks ☆246 · Updated last year
- Source code for models described in the paper "AudioCLIP: Extending CLIP to Image, Text and Audio" (https://arxiv.org/abs/2106.13043) ☆843 · Updated 3 years ago
- Code and pretrained models for the ICLR 2023 paper "Contrastive Audio-Visual Masked Autoencoder" ☆269 · Updated last year
- 🔊 Repository for our NAACL-HLT 2019 paper: AudioCaps ☆186 · Updated 6 months ago
- The repo hosts the code and model of MAViL ☆44 · Updated 2 years ago
- Official PyTorch implementation of the TIP paper "Generating Visually Aligned Sound from Videos" and the corresponding Visually Aligned S… ☆53 · Updated 4 years ago
- This repo contains the official PyTorch implementation of AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image … ☆85 · Updated last year
- VGGSound: A Large-scale Audio-Visual Dataset ☆328 · Updated 4 years ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies" ☆89 · Updated last year
- AudioLDM training, finetuning, evaluation, and inference ☆274 · Updated 9 months ago
- This toolbox aims to unify audio generation model evaluation for easier comparison ☆358 · Updated 11 months ago
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆195 · Updated last year
- [ACM MM 2021 Best Paper Award] Video Background Music Generation with Controllable Music Transformer ☆318 · Updated 3 months ago
- A novel diffusion-based model for synthesizing long-context, high-fidelity music efficiently ☆195 · Updated 2 years ago
- Toward Universal Text-to-Music Retrieval (TTMR) [ICASSP23] ☆113 · Updated 2 years ago
- Source code for "Sparse in Space and Time: Audio-visual Synchronisation with Trainable Selectors" (Spotlight at BMVC 2022) ☆51 · Updated last year
- Official code and models of the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generati… ☆187 · Updated last year
- LP-MusicCaps: LLM-Based Pseudo Music Captioning [ISMIR23] ☆338 · Updated last year
- The latent diffusion model for text-to-music generation ☆174 · Updated last year
- An Audio Language Model for Audio Tasks ☆315 · Updated last year
- Splits for the epic-sounds dataset ☆82 · Updated last month
- Source code for "Synchformer: Efficient Synchronization from Sparse Cues" (ICASSP 2024) ☆86 · Updated 7 months ago
- Learning Long-Term Spatial-Temporal Graphs for Active Speaker Detection (ECCV 2022) ☆65 · Updated last year
- SpeechCLIP: Integrating Speech with Pre-Trained Vision and Language Model, accepted to IEEE SLT 2022 ☆116 · Updated 2 years ago
- Audio dataset for training CLAP and other models ☆709 · Updated last year
- [ICCV 2023] Video Background Music Generation: Dataset, Method and Evaluation ☆77 · Updated last year
- MU-LLaMA: Music Understanding Large Language Model ☆287 · Updated 3 weeks ago