zlzhang1124 / AcousticFeatureExtraction
Simple acoustic feature extraction using the Librosa audio-processing library and the openSMILE toolkit.
☆193Updated 4 years ago
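For orientation, a minimal sketch of the kind of extraction this repository covers is shown below. This is not the repository's own code: the file name `speech.wav`, sample rate, and MFCC count are illustrative placeholders, and it assumes the `librosa` and `opensmile` Python packages are installed.

```python
# Illustrative sketch: MFCCs via Librosa, eGeMAPS functionals via openSMILE.
# "speech.wav" and the parameter values are placeholders, not repo defaults.
import librosa
import opensmile

# Librosa: load audio and compute 13 MFCCs per frame.
y, sr = librosa.load("speech.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape: (13, n_frames)

# openSMILE: one row of eGeMAPSv02 functionals for the whole utterance.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)
egemaps = smile.process_file("speech.wav")           # pandas DataFrame (88 features)

print(mfcc.shape, egemaps.shape)
```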
Alternatives and similar repositories for AcousticFeatureExtraction:
Users interested in AcousticFeatureExtraction are comparing it to the libraries listed below
- Audio Split: speech endpoint detection and segmentation based on the dual-threshold method (a minimal sketch follows this list)☆132Updated 4 years ago
- This project splits the RAVDESS dataset into 1 s short clips and trains an openSMILE+CNN model to classify each clip into one of four emotions: happy, sad, angry, and neutral, reaching roughly 76% accuracy.☆56Updated 3 years ago
- Multi-modal Speech Emotion Recogniton on IEMOCAP dataset☆89Updated last year
- Code for Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information☆137Updated last year
- Speech Emotion Recognition☆27Updated 4 years ago
- Speech emotion recognition and speaker recognition exercises built on the CASIA database☆64Updated 2 years ago
- Official implementation for the paper Exploring Wav2vec 2.0 fine-tuning for improved speech emotion recognition☆145Updated 3 years ago
- [ICASSP 2023] Official Tensorflow implementation of "Temporal Modeling Matters: A Novel Temporal Emotional Modeling Approach for Speech E…☆168Updated 9 months ago
- ☆104Updated 2 years ago
- This is the repository for the Neural Networks project Speech Emotion Classification Using Attention-Based LSTM☆12Updated 4 years ago
- ☆41Updated 4 years ago
- Speech emotion recognition☆35Updated this week
- Speech emotion recognition implemented in PyTorch☆166Updated last week
- Speech feature extraction for machine learning, including FBank, MFCC, and more, with explanations of the principles and step-by-step implementations☆52Updated 5 years ago
- Python implementations of speaker recognition (voiceprint recognition) algorithms, including GMM (completed), GMM-UBM, i-vector, and deep-learning-based speaker recognition (self-attention completed).☆90Updated 2 years ago
- Automatic speech emotion recognition based on transfer learning from spectrograms using ResNET☆21Updated 2 years ago
- This repository contains the code for our ICASSP paper `Speech Emotion Recognition using Semantic Information` https://arxiv.org/pdf/2103…☆24Updated 3 years ago
- Repository for my paper: Deep Multilayer Perceptrons for Dimensional Speech Emotion Recognition☆11Updated last year
- Repository for code and paper submitted for APSIPA 2019, Lanzhou, China☆22Updated 7 months ago
- Multilingual datasets with raw audio for speech emotion recognition☆22Updated 3 years ago
- The code for our INTERSPEECH 2020 paper - Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion R…☆117Updated 4 years ago
- Human emotions are one of the strongest ways of communication. Even if a person doesn’t understand a language, he or she can very well u…☆24Updated 3 years ago
- Data preparation for separation☆76Updated 3 years ago
- A collection of datasets for the purpose of emotion recognition/detection in speech.☆313Updated 5 months ago
- Official implementation of INTERSPEECH 2021 paper 'Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings'☆127Updated last month
- ☆16Updated 5 years ago
- ☆40Updated 2 years ago
- Speech enhancement☆16Updated 3 years ago
- Some useful speech-processing features, such as MFCC, gammatone filterbank, GFCC, spectrum (power spectrum and log-power spectrum), Amplit…☆126Updated 4 years ago
- Detect emotion from audio signals of the IEMOCAP dataset using a multi-modal approach. Utilized acoustic features, mel-spectrogram and text as …☆38Updated 11 months ago
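As referenced in the Audio Split entry above, dual-threshold endpoint detection marks a segment when short-time energy exceeds a high threshold and extends it while energy stays above a lower one. The sketch below is an illustrative NumPy/Librosa implementation, not the listed repository's code; the frame length, hop size, threshold ratios, and `speech.wav` are assumed placeholders.

```python
# Illustrative dual-threshold endpoint detection (not the listed repo's code).
# Frames whose short-time energy exceeds a high threshold seed a segment,
# which is extended while energy stays above a lower threshold.
import numpy as np
import librosa

def dual_threshold_endpoints(y, frame_len=400, hop=160, high_ratio=0.25, low_ratio=0.05):
    # Short-time energy per frame.
    frames = librosa.util.frame(y, frame_length=frame_len, hop_length=hop)
    energy = (frames ** 2).sum(axis=0)
    high = high_ratio * energy.max()
    low = low_ratio * energy.max()

    segments, start = [], None
    for i, e in enumerate(energy):
        if start is None and e > high:
            # Segment begins on the high threshold; back off to where
            # energy first rose above the low threshold.
            start = i
            while start > 0 and energy[start - 1] > low:
                start -= 1
        elif start is not None and e < low:
            # Segment ends when energy falls back below the low threshold.
            segments.append((start * hop, i * hop + frame_len))
            start = None
    if start is not None:
        segments.append((start * hop, len(y)))
    return segments  # list of (start_sample, end_sample) pairs

y, sr = librosa.load("speech.wav", sr=16000)
print(dual_threshold_endpoints(y))
```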