pc2752 / ss_synthesis (☆18, updated 5 years ago)

Related projects:
- GlottDNN vocoder and tools for training DNN excitation models (☆32, updated 3 years ago)
- Pitch estimation network (PiENet) for noise-robust neural F0 estimation of speech signals (☆50, updated 5 years ago)
- DNN-based singing voice synthesis (☆17, updated 5 years ago)
- PyTorch implementation of the CREPE pitch tracker (☆19, updated 4 years ago)
- Unsupervised representation learning for singing voice separation (☆21, updated last year)
- PyTorch implementation of the Deep Griffin-Lim Iteration paper (https://arxiv.org/abs/1903.03971) (☆36, updated 4 years ago)
- Multiple fundamental frequency estimation (☆26, updated 10 years ago)
- Addressing the confounds of accompaniments in singer identification (☆18, updated 4 years ago)
- Speech enhancement using mimic loss (☆15, updated 4 years ago)
- J-Net: audio separation with a randomly weighted encoder (☆10, updated 4 years ago)
- Repository for ISMIR 2022 tutorial T3(M): Designing Controllable Synthesis System for Musical Signals (☆27, updated last year)
- Python implementation of the Griffin-Lim algorithm for audio reconstruction from magnitude spectrograms (☆32, updated 8 months ago)
- Simple baseline model for the HEAR benchmark (☆22, updated last month)
- PyTorch implementation of "LASAFT-Net-v2: Listen, Attend and Separate by Attentively Aggregating Frequency Transformation" (☆33, updated 2 years ago)
- [ISMIR 2019] Learning a Joint Embedding Space of Monophonic and Mixed Music Signals for Singing Voice (☆27, updated last year)
- Multitrack Analysis/SynthesiS for Annotation, auGmentation and Evaluation (☆21, updated 6 years ago)
- Backpropagatable PyTorch implementation of https://craffel.github.io/mir_eval/ (☆35, updated 2 months ago)
- Code for the paper "Multi-Band Masking for Waveform-Based Singing Voice Separation", accepted at EUSIPCO 2022 (☆15, updated 2 years ago)
- A subset of the DALI dataset consisting of 240 polyphonic recordings, used to benchmark lyrics transcription (☆12, updated 2 years ago)
- Pretrained model for "A Phoneme-informed Neural Network Model for Note-level Singing Transcription", ICASSP 2023 (☆24, updated last year)
- Audio samples for the paper "TinyLSTMs: Efficient Neural Speech Enhancement for Hearing Aids" (☆39, updated 4 years ago)
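Several of the projects above implement or extend the Griffin-Lim algorithm for recovering a waveform from an STFT magnitude. As a point of reference, here is a minimal sketch of the classic iteration using `scipy.signal` — it is not taken from any of the listed repositories, and the window parameters (`nperseg`, `noverlap`) are illustrative choices:

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, n_iter=32, nperseg=512, noverlap=384, seed=0):
    """Iteratively estimate a phase consistent with the target STFT
    magnitude, then invert it to a waveform (Griffin & Lim, 1984)."""
    rng = np.random.default_rng(seed)
    # Start from a random phase estimate.
    phase = np.exp(1j * rng.uniform(-np.pi, np.pi, mag.shape))
    for _ in range(n_iter):
        # Invert the current magnitude/phase estimate to a waveform ...
        _, x = istft(mag * phase, nperseg=nperseg, noverlap=noverlap)
        # ... then re-analyse it to obtain an updated, consistent phase.
        _, _, spec = stft(x, nperseg=nperseg, noverlap=noverlap)
        if spec.shape[1] < mag.shape[1]:  # guard against frame-count drift
            spec = np.pad(spec, ((0, 0), (0, mag.shape[1] - spec.shape[1])))
        phase = np.exp(1j * np.angle(spec[:, :mag.shape[1]]))
    _, x = istft(mag * phase, nperseg=nperseg, noverlap=noverlap)
    return x
```

Each iteration projects the estimate onto the set of spectrograms with the target magnitude and onto the set of consistent STFTs, so the reconstruction error decreases monotonically; the Deep Griffin-Lim Iteration paper linked above replaces part of this loop with a learned denoiser.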