adbailey1 / daic_woz_process
☆50Updated 8 months ago
Related projects
Alternatives and complementary repositories for daic_woz_process
- Reproduction of DepAudioNet by Ma et al. (DepAudioNet: An Efficient Deep Model for Audio-based Depression Classification, https://dl.acm.…☆66Updated 3 years ago
- ☆16Updated last week
- Automatic Depression Detection by Multi-model Ensemble. Based on DAIC-WOZ dataset.☆25Updated 3 years ago
- Code for EmoAudioNet, a deep neural network for speech classification (published in ICPR 2020)☆11Updated 4 years ago
- ☆19Updated 3 months ago
- Automatic Depression Detection: a GRU/BiLSTM-based Model and An Emotional Audio-Textual Corpus☆136Updated last year
- Papers using E-DAIC dataset (AVEC 2019 DDS)☆24Updated last year
- ☆10Updated 11 months ago
- the baseline model of CMDC corpus☆31Updated 2 years ago
- Bachelor Thesis - Deep Learning-based Multi-modal Depression Estimation☆53Updated last year
- Detect Depression with AI Sub-challenge (DSS) of AVEC2019, experiment version by YZK☆13Updated 3 years ago
- depression-detect: Predicting depression from AVEC2014 using ResNet18.☆39Updated 4 months ago
- Scripts used in the research described in the paper "Multimodal Emotion Recognition with High-level Speech and Text Features" accepted in…☆44Updated 3 years ago
- Depression-Detection: a machine learning algorithm that classifies audio using acoustic features of human speech, thus detecting de…☆14Updated 4 years ago
- A survey of deep multimodal emotion recognition.☆51Updated 2 years ago
- Multimodal emotion recognition combining speech and text, with large-model fine-tuning☆13Updated 11 months ago
- This repository contains the code for our ICASSP paper `Speech Emotion Recognition using Semantic Information` https://arxiv.org/pdf/2103…☆23Updated 3 years ago
- IEEE T-BIOM : "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention"☆31Updated 9 months ago
- Detecting depression levels in employees from videos of DAIC-WOZ dataset using LSTMs and Facial Action Units as input.☆24Updated 5 years ago
- Code for the InterSpeech 2023 paper: MMER: Multimodal Multi-task learning for Speech Emotion Recognition☆65Updated 7 months ago
- Code for Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information☆125Updated 11 months ago
- Baseline scripts for the Audio/Visual Emotion Challenge 2019☆76Updated 2 years ago
- Repository for my paper: Dimensional Speech Emotion Recognition Using Acoustic Features and Word Embeddings using Multitask Learning☆16Updated 3 months ago
- PyTorch implementation for Audio-Visual Domain Adaptation Feature Fusion for Speech Emotion Recognition☆12Updated 2 years ago
- Detecting depression in a conversation using a Convolutional Neural Network☆64Updated 3 years ago
- Source code for paper Multi-Task Learning for Depression Detection in Dialogs (SIGDial 2022)☆10Updated last year
- Official source code for the paper: "Reading Between the Frames Multi-Modal Non-Verbal Depression Detection in Videos"☆43Updated 5 months ago
- Detect emotion from audio signals of IEMOCAP dataset using multi-modal approach. Utilized acoustic features, mel-spectrogram and text as …☆36Updated 8 months ago
- SpeechFormer++ in PyTorch☆41Updated last year