Jackustc / Question-Level-Feature-Extraction-on-DAIC-WOZ-dataset
☆31 · Updated 2 years ago
Alternatives and similar repositories for Question-Level-Feature-Extraction-on-DAIC-WOZ-dataset:
Users interested in Question-Level-Feature-Extraction-on-DAIC-WOZ-dataset are comparing it to the libraries listed below.
- Automatic Depression Detection: a GRU/BiLSTM-based Model and An Emotional Audio-Textual Corpus ☆148 · Updated last year
- Baseline scripts for the Audio/Visual Emotion Challenge 2019 ☆77 · Updated 2 years ago
- The code for our INTERSPEECH 2020 paper - Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion R… ☆116 · Updated 3 years ago
- ☆52 · Updated 10 months ago
- Scripts used in the research described in the paper "Multimodal Emotion Recognition with High-level Speech and Text Features" accepted in… ☆49 · Updated 3 years ago
- Reproduction of DepAudioNet by Ma et al. ("DepAudioNet: An Efficient Deep Model for Audio based Depression Classification", https://dl.acm.…) ☆70 · Updated 3 years ago
- The baseline model of the CMDC corpus ☆36 · Updated 2 years ago
- ☆20 · Updated 5 months ago
- Multi-modal Speech Emotion Recognition on the IEMOCAP dataset ☆86 · Updated last year
- Human Emotion Understanding using a multimodal dataset. ☆91 · Updated 4 years ago
- A multimodal approach to emotion recognition using audio and text. ☆169 · Updated 4 years ago
- Scripts to model depression in speech and text ☆70 · Updated 4 years ago
- ☆16 · Updated last year
- Source code for the paper "Text-based Depression Detection: What Triggers An Alert" ☆45 · Updated last year
- Baseline scripts for AVEC 2019, Depression Detection Sub-challenge ☆15 · Updated 5 years ago
- Detecting depression in a conversation using a Convolutional Neural Network ☆66 · Updated 3 years ago
- Automatic Depression Detection by Multi-model Ensemble, based on the DAIC-WOZ dataset. ☆30 · Updated 4 years ago
- Detect emotion from audio signals of the IEMOCAP dataset using a multi-modal approach. Utilizes acoustic features, mel-spectrogram and text as … ☆37 · Updated 10 months ago
- Reproducing the baselines of the 2nd Multimodal Sentiment Analysis Challenge (MuSe 2021) ☆39 · Updated 3 years ago
- A survey of deep multimodal emotion recognition. ☆53 · Updated 2 years ago
- Implementation of the paper "Emotion Identification from raw speech signals using DNNs" ☆14 · Updated 4 years ago
- Code for EmoAudioNet, a deep neural network for speech classification (published at ICPR 2020) ☆11 · Updated 4 years ago
- Emotion recognition on the IEMOCAP dataset. ☆26 · Updated 4 years ago
- Source code for the paper "Multi-Task Learning for Depression Detection in Dialogs" (SIGDial 2022) ☆10 · Updated last year
- Papers using the E-DAIC dataset (AVEC 2019 DDS) ☆27 · Updated last year
- A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis ☆120 · Updated 2 years ago
- Multi-modal Emotion detection from IEMOCAP on Speech, Text, Motion-Capture Data using Neural Nets. ☆162 · Updated 4 years ago
- The code repository for NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition". ☆98 · Updated last year
- This is a short tutorial for using the CMU-MultimodalSDK. ☆81 · Updated 5 years ago