Jackustc / Question-Level-Feature-Extraction-on-DAIC-WOZ-dataset
☆29 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for Question-Level-Feature-Extraction-on-DAIC-WOZ-dataset
- Automatic Depression Detection: a GRU/BiLSTM-based Model and An Emotional Audio-Textual Corpus ☆138 · Updated last year
- Source code for the paper "Text-based Depression Detection: What Triggers An Alert" ☆45 · Updated last year
- Automatic Depression Detection by Multi-model Ensemble, based on the DAIC-WOZ dataset ☆26 · Updated 3 years ago
- Baseline scripts for the Audio/Visual Emotion Challenge 2019 ☆76 · Updated 2 years ago
- ☆50 · Updated 9 months ago
- Scripts used in the research described in the paper "Multimodal Emotion Recognition with High-level Speech and Text Features" accepted in… ☆45 · Updated 3 years ago
- Reproduction of DepAudioNet by Ma et al. {DepAudioNet: An Efficient Deep Model for Audio based Depression Classification, (https://dl.acm.… ☆67 · Updated 3 years ago
- The baseline model of the CMDC corpus ☆33 · Updated 2 years ago
- Code for EmoAudioNet, a deep neural network for speech classification (published in ICPR 2020) ☆11 · Updated 4 years ago
- ☆19 · Updated 3 months ago
- Detecting depression in a conversation using a Convolutional Neural Network ☆65 · Updated 3 years ago
- Multi-modal Speech Emotion Recognition on the IEMOCAP dataset ☆85 · Updated last year
- A survey of deep multimodal emotion recognition. ☆51 · Updated 2 years ago
- The code for our INTERSPEECH 2020 paper - Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion R… ☆114 · Updated 3 years ago
- Scripts to model depression in speech and text ☆70 · Updated 4 years ago
- Baseline scripts for AVEC 2019, Depression Detection Sub-challenge ☆15 · Updated 5 years ago
- Detecting depression levels in employees from videos of the DAIC-WOZ dataset using LSTMs and Facial Action Units as input ☆24 · Updated 5 years ago
- Source code for the paper "Multi-Task Learning for Depression Detection in Dialogs" (SIGDial 2022) ☆10 · Updated last year
- Multi-modal Emotion detection from IEMOCAP on Speech, Text, Motion-Capture Data using Neural Nets. ☆161 · Updated 3 years ago
- Depression-Detection represents a machine learning algorithm to classify audio using acoustic features in human speech, thus detecting de… ☆14 · Updated 4 years ago
- Detect emotion from audio signals of the IEMOCAP dataset using a multi-modal approach. Utilized acoustic features, mel-spectrogram and text as … ☆36 · Updated 8 months ago
- The code repository for NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition". ☆96 · Updated last year
- TensorFlow implementation of "Attentive Modality Hopping for Speech Emotion Recognition," ICASSP-20 ☆32 · Updated 4 years ago
- Reproducing the baselines of the 2nd Multimodal Sentiment Analysis Challenge (MuSe 2021) ☆38 · Updated 2 years ago
- Human Emotion Understanding using a multimodal dataset. ☆83 · Updated 4 years ago
- ☆26 · Updated 2 years ago
- Detect Depression with AI Sub-challenge (DSS) of AVEC 2019, experiment version via YZK ☆13 · Updated 3 years ago
- This repository contains the code for our ICASSP paper `Speech Emotion Recognition using Semantic Information` https://arxiv.org/pdf/2103… ☆23 · Updated 3 years ago
- ☆10 · Updated last year