Veleslavia / conditioned-u-net
Conditioned U-Net for Music Source Separation
☆20 · Updated 3 years ago
Alternatives and similar repositories for conditioned-u-net:
Users interested in conditioned-u-net are comparing it to the repositories listed below.
- ICASSP 2022 ☆61 · Updated 3 years ago
- NU-Wave: A Diffusion Probabilistic Model for Neural Audio Upsampling ☆37 · Updated 3 years ago
- Unsupervised Representation Learning for Singing Voice Separation ☆22 · Updated 2 years ago
- ☆83 · Updated last year
- Deep Performer: Score-to-audio music performance synthesis ☆43 · Updated last year
- A unified model for zero-shot singing voice conversion and synthesis ☆22 · Updated 2 years ago
- Learning and controlling the source-filter representation of speech with a variational autoencoder ☆45 · Updated 2 years ago
- A PyTorch implementation of the paper "AMSS-Net: Audio Manipulation on User-Specified Sources with Textual Queries" (ACM Multimedia 2021… ☆21 · Updated 3 years ago
- ☆32 · Updated 4 years ago
- PyTorch Dataset for Speech and Music audio ☆74 · Updated 9 months ago
- Implementation for "Music Enhancement via Image Translation and Vocoding" ☆54 · Updated 2 years ago
- A PyTorch implementation of "LASAFT-Net-v2: Listen, Attend and Separate by Attentively Aggregating Frequency Transformation" ☆33 · Updated 3 years ago
- An evaluation toolkit for voice conversion models ☆42 · Updated 3 years ago
- Project for MIDI to Audio Synthesis ☆23 · Updated 2 years ago
- Rough implementation of Simultaneous Separation and Transcription of Mixtures with Multiple Polyphonic and Percussive Instruments (Ethan … ☆24 · Updated 4 years ago
- UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation ☆74 · Updated 3 years ago
- An invertible and differentiable implementation of the Constant-Q Transform (CQT) ☆60 · Updated 2 years ago
- An unofficial implementation of https://arxiv.org/abs/2005.05106 ☆46 · Updated 4 years ago
- Code for the paper "Multi-Band Masking for Waveform-Based Singing Voice Separation", accepted at EUSIPCO 2022 ☆15 · Updated 2 years ago
- [ISMIR 2019] Learning a Joint Embedding Space of Monophonic and Mixed Music Signals for Singing Voice ☆28 · Updated 2 years ago
- ☆40 · Updated 4 years ago
- Simple baseline model for the HEAR benchmark ☆23 · Updated last month
- Deep Speech Distances PyTorch ☆28 · Updated 3 years ago
- An implementation of "Towards Improving Harmonic Sensitivity and Prediction Stability for Singing Melody Extraction" (ISMIR 2023) ☆23 · Updated last year
- Who calls the shots? Rethinking Few-Shot Learning for Audio (WASPAA 2021) ☆42 · Updated 2 years ago
- Reproducible Subjective Evaluation ☆59 · Updated last year
- ☆24 · Updated 3 years ago
- Spectrogram inversion tools in PyTorch. Documentation: https://spectrogram-inversion.readthedocs.io ☆49 · Updated last year
- PyTorch implementation of the ICASSP-24 paper "Improving Audio Captioning Models with Fine-grained Audio Features, Text Embedding Superv… ☆36 · Updated last year
- This repository contains laughter-related synthesis systems ☆13 · Updated 4 years ago