ShaheenPerveen / speech-emotion-recognition-iemocap
Detect emotion from audio signals in the IEMOCAP dataset using a multi-modal approach, with acoustic features, mel-spectrograms, and text as inputs to ML/DL models.
☆41 · Mar 7, 2024 · Updated last year
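The description above mentions feeding acoustic features, mel-spectrograms, and text into ML/DL models. As a rough illustration (not taken from this repository), the sketch below shows how the audio-side inputs are commonly extracted with librosa; the file path, sampling rate, and feature sizes are assumptions for the example, not the repo's actual configuration.

```python
# Minimal sketch (assumed, not the repository's code): extract a log-mel
# spectrogram and a fixed-size acoustic feature vector for one utterance.
import numpy as np
import librosa

def extract_audio_features(wav_path, sr=16000, n_mels=64, n_mfcc=13):
    # Load the utterance audio (IEMOCAP clips are mono WAV files)
    y, sr = librosa.load(wav_path, sr=sr)

    # Log-scaled mel-spectrogram, usable as a 2-D input to a CNN
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)

    # Frame-level MFCCs summarized into a fixed-size acoustic vector
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    acoustic_vec = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    return log_mel, acoustic_vec
```

In a multi-modal setup like the one described, the spectrogram and acoustic vector would be paired with the utterance transcript and fed to separate audio and text branches of the model.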
Alternatives and similar repositories for speech-emotion-recognition-iemocap
Users interested in speech-emotion-recognition-iemocap are comparing it to the repositories listed below.
- Multi-modal Speech Emotion Recognition on IEMOCAP dataset ☆95 · Jul 6, 2023 · Updated 2 years ago
- A repository for emotion recognition from speech, text and mocap data from the IEMOCAP dataset ☆13 · Dec 12, 2018 · Updated 7 years ago
- Automatic speech emotion recognition based on transfer learning from spectrograms using ResNet ☆27 · Mar 11, 2022 · Updated 3 years ago
- SERVER: Multi-modal Speech Emotion Recognition using Transformer-based and Vision-based Embeddings ☆15 · Jan 23, 2024 · Updated 2 years ago
- Multi-modal emotion detection from IEMOCAP on speech, text, and motion-capture data using neural nets ☆171 · Dec 13, 2020 · Updated 5 years ago
- Multi-modal human emotion recognition of speech clips (audio + video) in the RAVDESS dataset using a two-stream architecture ☆32 · Mar 2, 2023 · Updated 2 years ago
- Speech Emotion Recognition (SER) in real time, using Deep Neural Networks (DNN) with Long Short-Term Memory (LSTM) ☆116 · Mar 6, 2022 · Updated 3 years ago
- For our speech emotion recognition project ☆28 · Mar 1, 2021 · Updated 4 years ago
- We present a study of a neural network based method for speech emotion recognition, using audio-only features. In the studied scheme, the… ☆11 · Jul 24, 2024 · Updated last year
- Depression-Detection represents a machine learning algorithm to classify audio using acoustic features in human speech, thus detecting de… ☆14 · Jul 10, 2020 · Updated 5 years ago
- ☆10 · Aug 16, 2024 · Updated last year
- Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on IEMOCAP dataset) ☆442 · Dec 21, 2023 · Updated 2 years ago
- Multi-task Learning for Multi-modal Emotion Recognition and Sentiment Analysis ☆13 · Mar 17, 2021 · Updated 4 years ago
- Multimodal Affective Analysis Using Hierarchical Attention Strategy ☆12 · Dec 7, 2018 · Updated 7 years ago
- A TensorFlow implementation of Speech Emotion Recognition using audio signals and text data ☆12 · May 16, 2022 · Updated 3 years ago
- Code for Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information ☆164 · Nov 27, 2023 · Updated 2 years ago
- TensorFlow implementation of "Pre-trained Deep Convolution Neural Network Model With Attention for Speech Emotion Recognition" ☆10 · Dec 19, 2021 · Updated 4 years ago
- Scripts used in the research described in the paper "Multimodal Emotion Recognition with High-level Speech and Text Features" accepted in… ☆52 · Sep 14, 2021 · Updated 4 years ago
- Multimodal preprocessing on the IEMOCAP dataset ☆13 · Jun 8, 2018 · Updated 7 years ago
- ☆14 · Aug 24, 2018 · Updated 7 years ago
- The official implementation of the method discussed in the paper "Improving Spoken Language Identification with Map-Mix" (work accepted at I… ☆18 · Feb 17, 2023 · Updated 3 years ago
- Repository for my paper: Deep Multilayer Perceptrons for Dimensional Speech Emotion Recognition ☆11 · Oct 24, 2023 · Updated 2 years ago
- Speech emotion classification with a novel parallel CNN-Transformer model built with PyTorch, plus thorough explanations of CNNs, Transform… ☆264 · Nov 6, 2020 · Updated 5 years ago
- ☆18 · Feb 25, 2023 · Updated 2 years ago
- Chinese BERT classification with TensorFlow 2.0 and audio classification with MFCC ☆14 · Dec 2, 2020 · Updated 5 years ago
- [ICASSP 2023] Mingling or Misalignment? Temporal Shift for Speech Emotion Recognition with Pre-trained Representations ☆40 · Dec 18, 2023 · Updated 2 years ago
- An in-depth analysis of audio classification on the RAVDESS dataset. Feature engineering, hyperparameter optimization, model evaluation, … ☆79 · Nov 5, 2020 · Updated 5 years ago
- The code for our INTERSPEECH 2020 paper - Jointly Fine-Tuning "BERT-like" Self-Supervised Models to Improve Multimodal Speech Emotion R… ☆119 · Feb 26, 2021 · Updated 4 years ago
- This repository contains PyTorch implementations of 4 different models for classifying emotions in speech ☆211 · Nov 10, 2022 · Updated 3 years ago
- Detecting emotions from MFCC features of human speech using deep learning ☆133 · Dec 2, 2020 · Updated 5 years ago
- A multimodal approach to emotion recognition using audio and text ☆188 · Jun 15, 2020 · Updated 5 years ago
- Human emotions are one of the strongest ways of communication. Even if a person doesn't understand a language, he or she can very well u… ☆25 · Jun 23, 2021 · Updated 4 years ago
- Advances in audio anti-spoofing and deepfake detection using graph neural networks and self-supervised learning ☆23 · Aug 20, 2023 · Updated 2 years ago
- SpeechFormer++ in PyTorch ☆50 · Jul 21, 2023 · Updated 2 years ago
- SylNet: An Adaptable End-to-End Syllable Count Estimator for Speech ☆27 · May 25, 2023 · Updated 2 years ago
- ☆30 · May 7, 2024 · Updated last year
- Implementation of the paper "Improved End-to-End Speech Emotion Recognition Using Self Attention Mechanism and Multitask Learning" from I… ☆57 · Dec 20, 2020 · Updated 5 years ago
- Curated list of NLP tutorials ☆30 · Feb 27, 2025 · Updated 11 months ago
- Official implementation of our ASVspoof 2021 paper, "UR Channel-Robust Synthetic Speech Detection System for ASVspoof 2021" ☆56 · Feb 15, 2022 · Updated 4 years ago