Jackustc / Question-Level-Feature-Extraction-on-DAIC-WOZ-dataset
☆32 · Updated 2 years ago
Alternatives and similar repositories for Question-Level-Feature-Extraction-on-DAIC-WOZ-dataset:
Users interested in Question-Level-Feature-Extraction-on-DAIC-WOZ-dataset are comparing it to the repositories listed below.
- Baseline scripts for the Audio/Visual Emotion Challenge 2019 ☆79 · Updated 3 years ago
- ☆61 · Updated last year
- Reproduction of DepAudioNet by Ma et al., "DepAudioNet: An Efficient Deep Model for Audio based Depression Classification" (https://dl.acm.… ☆76 · Updated 3 years ago
- Automatic Depression Detection by Multi-model Ensemble, based on the DAIC-WOZ dataset ☆33 · Updated 4 years ago
- Automatic Depression Detection: a GRU/BiLSTM-based Model and An Emotional Audio-Textual Corpus ☆174 · Updated last year
- The baseline model of the CMDC corpus ☆40 · Updated 2 years ago
- Scripts used in the research described in the paper "Multimodal Emotion Recognition with High-level Speech and Text Features", accepted in… ☆53 · Updated 3 years ago
- ☆20 · Updated 9 months ago
- Source code for the paper "Text-based Depression Detection: What Triggers An Alert" ☆48 · Updated last year
- Multi-modal Speech Emotion Recognition on the IEMOCAP dataset ☆89 · Updated last year
- Baseline scripts for AVEC 2019, Depression Detection Sub-challenge ☆15 · Updated 5 years ago
- Official source code for the paper: "It’s Just a Matter of Time: Detecting Depression with Time-Enriched Multimodal Transformers" ☆54 · Updated last year
- Scripts to model depression in speech and text ☆71 · Updated 5 years ago
- Human Emotion Understanding using a multimodal dataset ☆97 · Updated 4 years ago
- The code for our INTERSPEECH 2020 paper - Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion R… ☆120 · Updated 4 years ago
- Source code for the paper "Multi-Task Learning for Depression Detection in Dialogs" (SIGDial 2022) ☆10 · Updated 3 months ago
- Detecting depression in a conversation using a Convolutional Neural Network ☆70 · Updated 4 years ago
- Detect emotion from audio signals of the IEMOCAP dataset using a multi-modal approach. Utilized acoustic features, mel-spectrogram and text as … ☆39 · Updated last year
- Multi-modal Emotion Detection from IEMOCAP on Speech, Text, Motion-Capture Data using Neural Nets ☆162 · Updated 4 years ago
- Papers using the E-DAIC dataset (AVEC 2019 DDS) ☆31 · Updated 2 years ago
- The code repository for the NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition" ☆103 · Updated 2 years ago
- 🔆 📝 A reading list focused on Multimodal Emotion Recognition (MER) 👂👄 👀 💬 ☆121 · Updated 4 years ago
- A survey of deep multimodal emotion recognition ☆52 · Updated 3 years ago
- A multimodal approach to emotion recognition using audio and text ☆175 · Updated 4 years ago
- TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text," IEEE SLT-18 ☆280 · Updated 10 months ago
- Depression-Detection implements a machine learning algorithm to classify audio using acoustic features in human speech, thus detecting de… ☆14 · Updated 4 years ago
- Modality-Transferable-MER, a multimodal emotion recognition model with zero-shot and few-shot abilities ☆64 · Updated 4 years ago
- Code for EmoAudioNet, a deep neural network for speech classification (published in ICPR 2020) ☆12 · Updated 4 years ago
- This repository contains the code for our ICASSP paper "Speech Emotion Recognition using Semantic Information", https://arxiv.org/pdf/2103… ☆24 · Updated 4 years ago
- ☆27 · Updated 3 years ago
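
For readers unfamiliar with DAIC-WOZ, the sketch below illustrates the idea behind question-level processing of its interview transcripts: grouping each participant response under the interviewer question that precedes it. This is a minimal, hypothetical example, not code from any repository listed above; it assumes the commonly described transcript layout (a tab-separated `XXX_TRANSCRIPT.csv` with `start_time`, `stop_time`, `speaker`, `value` columns and interviewer turns labelled "Ellie"), and the file path and helper function name are illustrative only.

```python
# Hypothetical sketch: group DAIC-WOZ participant turns by the preceding
# interviewer question, as a starting point for question-level features.
# Assumes a tab-separated transcript with columns: start_time, stop_time,
# speaker, value, where the interviewer is labelled "Ellie".
import pandas as pd


def group_responses_by_question(transcript_path: str) -> dict:
    """Map each interviewer question to the participant turns that follow it."""
    df = pd.read_csv(transcript_path, sep="\t")
    responses = {}
    current_question = None
    for _, row in df.iterrows():
        text = str(row["value"]).strip()
        if row["speaker"] == "Ellie":
            current_question = text                    # a new question opens a segment
            responses.setdefault(current_question, [])
        elif current_question is not None:
            responses[current_question].append(text)   # participant turn in this segment
    return responses


if __name__ == "__main__":
    # Placeholder path; adjust to a local copy of the dataset.
    segments = group_responses_by_question("300_TRANSCRIPT.csv")
    for question, turns in list(segments.items())[:3]:
        print(question, "->", " ".join(turns))
```

Per-question segments like these can then be fed to whatever acoustic or textual feature extractor a given repository uses; the grouping step itself is independent of the downstream model.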