HLTSingapore / Emotional-Speech-Data
This is the GitHub page for publicly available emotional speech data.
☆357 · Updated 3 years ago
Alternatives and similar repositories for Emotional-Speech-Data
Users interested in Emotional-Speech-Data are comparing it to the repositories listed below:
- A Survey on Neural Speech Synthesis (https://arxiv.org/pdf/2106.15561.pdf) ☆369 · Updated 3 years ago
- PPG-Based Voice Conversion ☆341 · Updated 2 years ago
- An official reimplementation of the method described in the INTERSPEECH 2021 paper - Speech Resynthesis from Discrete Disentangled Self-S… ☆407 · Updated last year
- Charsiu: A neural phonetic aligner. ☆307 · Updated 2 years ago
- A collection of datasets for the purpose of emotion recognition/detection in speech. ☆348 · Updated 9 months ago
- UniSpeech - Large Scale Self-Supervised Learning for Speech ☆464 · Updated last year
- A PyTorch implementation of Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis ☆368 · Updated 2 years ago
- Implementation of "MOSNet: Deep Learning based Objective Assessment for Voice Conversion" ☆371 · Updated 11 months ago
- PyTorch Implementation of ByteDance's Cross-speaker Emotion Transfer Based on Speaker Condition Layer Normalization and Semi-Supervised T… ☆194 · Updated 2 years ago
- Speaker embedding (d-vector) trained with GE2E loss ☆282 · Updated last year
- Official implementation of VQMIVC: One-shot (any-to-any) Voice Conversion @ Interspeech 2021 + Online playing demo! ☆351 · Updated 3 years ago
- Foreign Accent Conversion by Synthesizing Speech from Phonetic Posteriorgrams (Interspeech'19) ☆143 · Updated 2 years ago
- PyTorch Implementation of Non-autoregressive Expressive (emotional, conversational) TTS based on FastSpeech2, supporting English, Korean,… ☆303 · Updated 3 years ago
- ☆120 · Updated 2 years ago
- The Emotional Voices Database: Towards Controlling the Emotional Expressiveness in Voice Generation Systems ☆271 · Updated last year
- Paper, Code and Statistics for Self-Supervised Learning and Pre-Training on Speech. ☆206 · Updated last year
- A Non-Autoregressive Transformer based Text-to-Speech, supporting a family of SOTA transformers with supervised and unsupervised duration… ☆326 · Updated 2 years ago
- Mel cepstral distortion (MCD) computations in Python ☆224 · Updated 8 years ago (a minimal computation sketch follows this list)
- This is the main repository of open-sourced speech technology by Huawei Noah's Ark Lab. ☆590 · Updated last year
- Any-to-any voice conversion by end-to-end extracting and fusing fine-grained voice fragments with attention ☆202 · Updated 4 years ago
- Official implementation of INTERSPEECH 2021 paper 'Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings' ☆134 · Updated 6 months ago
- UT-Sarulab MOS prediction system using SSL models ☆248 · Updated last year
- Deep speaker embeddings in PyTorch, including x-vectors. Code used in this work: https://arxiv.org/abs/2007.16196 ☆317 · Updated 4 years ago
- Collection of pretrained models for the Montreal Forced Aligner ☆156 · Updated 3 weeks ago
- Official implementation of Meta-StyleSpeech and StyleSpeech ☆249 · Updated 3 years ago
- ☆192 · Updated last year
- Implementation code of non-parallel sequence-to-sequence VC ☆248 · Updated 2 years ago
- Research code for the paper "Fine-tuning wav2vec2 for speaker recognition" found at https://arxiv.org/abs/2109.15053 ☆145 · Updated 3 years ago
- Phoneme Recognition using pre-trained models Wav2vec2, HuBERT and WavLM. Throughout this project, we compared specifically three differen… ☆234 · Updated 3 years ago
- see README ☆351 · Updated 11 months ago
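One of the items above provides mel cepstral distortion (MCD) computations, a standard objective metric for comparing converted speech against a reference. As a rough illustration only (not the linked repository's code), the sketch below computes frame-averaged MCD between two pre-aligned mel-cepstral coefficient sequences; the function name, array shapes, and the exclusion of the 0th (energy) coefficient are assumptions for this example.

```python
import numpy as np

def mel_cepstral_distortion(ref_mcep: np.ndarray, syn_mcep: np.ndarray) -> float:
    """Frame-averaged MCD in dB between two aligned mel-cepstral sequences.

    Both arrays have shape (num_frames, num_coeffs). The 0th coefficient
    (energy) is dropped, as is common practice, and the frames are assumed
    to already be time-aligned (e.g. via DTW), which this sketch does not do.
    """
    # Drop the 0th (energy) coefficient from both sequences.
    diff = ref_mcep[:, 1:] - syn_mcep[:, 1:]
    # Per-frame Euclidean distance over the remaining coefficients.
    dist = np.sqrt((diff ** 2).sum(axis=1))
    # Standard scaling constant: 10 * sqrt(2) / ln(10) ≈ 6.14.
    return float((10.0 * np.sqrt(2.0) / np.log(10.0)) * dist.mean())

# Example with random placeholder data; real use would extract mel-cepstra
# from audio (e.g. with pysptk) and align the two sequences first.
ref = np.random.randn(200, 25)
syn = ref + 0.1 * np.random.randn(200, 25)
print(f"MCD: {mel_cepstral_distortion(ref, syn):.2f} dB")
```

Lower values indicate a closer spectral match; absolute numbers depend heavily on the feature-extraction settings, so MCD is most meaningful when comparing systems evaluated under the same configuration.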