v-iashin / SpecVQGAN
Source code for "Taming Visually Guided Sound Generation" (Oral at the BMVC 2021)
☆358 · Updated 8 months ago
Alternatives and similar repositories for SpecVQGAN:
Users interested in SpecVQGAN are comparing it to the repositories listed below.
- Official implementation of the paper WAV2CLIP: LEARNING ROBUST AUDIO REPRESENTATIONS FROM CLIP ☆337 · Updated 3 years ago
- Official implementation of the pipeline presented in I hear your true colors: Image Guided Audio Generation ☆112 · Updated 2 years ago
- AudioLDM training, finetuning, evaluation and inference. ☆239 · Updated 3 months ago
- A novel diffusion-based model for synthesizing long-context, high-fidelity music efficiently. ☆194 · Updated last year
- The latent diffusion model for text-to-music generation. ☆166 · Updated last year
- Official PyTorch implementation of the TIP paper "Generating Visually Aligned Sound from Videos" and the corresponding Visually Aligned S… ☆52 · Updated 4 years ago
- Official codes and models of the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generati… ☆178 · Updated 11 months ago
- A lightweight library for Fréchet Audio Distance calculation. ☆257 · Updated 6 months ago
- ☆157 · Updated last year
- Trainer for audio-diffusion-pytorch ☆128 · Updated 2 years ago
- Toward Universal Text-to-Music-Retrieval (TTMR) [ICASSP23] ☆113 · Updated last year
- A toolbox that provides hackable building blocks for generic 1D/2D/3D UNets, in PyTorch. ☆85 · Updated last year
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies". ☆86 · Updated last year
- [ICCV 2023] Video Background Music Generation: Dataset, Method and Evaluation ☆71 · Updated 11 months ago
- The source code of our paper "Diffsound: discrete diffusion model for text-to-sound generation" ☆357 · Updated last year
- A simple library for Fréchet Audio Distance (FAD) calculation ☆184 · Updated 2 weeks ago
- This toolbox aims to unify audio generation model evaluation for easier comparison. ☆326 · Updated 5 months ago
- This repository contains metadata for the WavCaps dataset and code for downstream tasks. ☆218 · Updated 7 months ago
- Official implementation of SawSing (ISMIR'22) ☆257 · Updated 2 years ago
- Source code for "FIGARO: Generating Symbolic Music with Fine-Grained Artistic Control" ☆150 · Updated 5 months ago
- Encode and decode audio samples to/from compressed latent representations! ☆185 · Updated 3 weeks ago
- Symbolic Music Generation with Diffusion Models ☆240 · Updated last week
- Audio Dataset for training CLAP and other models ☆669 · Updated last year
- 🔊 Repository for our NAACL-HLT 2019 paper: AudioCaps ☆157 · Updated 3 weeks ago
- BDDM: Bilateral Denoising Diffusion Models for Fast and High-Quality Speech Synthesis ☆227 · Updated 2 years ago
- A collection of audio autoencoders, in PyTorch. ☆40 · Updated 2 years ago
- PyTorch implementation of MuseMorphose (published at IEEE/ACM TASLP), a Transformer-based model for music style transfer. ☆179 · Updated 2 years ago
- Metrics for evaluating music and audio generative models – with a focus on long-form, full-band, and stereo generations. ☆194 · Updated last week
- The official code repo for "Zero-shot Audio Source Separation through Query-based Learning from Weakly-labeled Data", in AAAI 2022 ☆195 · Updated 2 years ago
- Official Implementation of "Multitrack Music Transformer" (ICASSP 2023) ☆143 · Updated last year