adbailey1 / DepAudioNet_reproduction
Reproduction of DepAudioNet by Ma et al. ("DepAudioNet: An Efficient Deep Model for Audio-based Depression Classification", AVEC 2016, https://dl.acm.org/doi/10.1145/2988257.2988267)
☆70 · Updated 3 years ago
Alternatives and similar repositories for DepAudioNet_reproduction:
Users interested in DepAudioNet_reproduction are comparing it to the repositories listed below.
- ☆53 · Updated 11 months ago
- Code for EmoAudioNet, a deep neural network for speech classification (published in ICPR 2020) ☆11 · Updated 4 years ago
- Automatic Depression Detection: a GRU/BiLSTM-based Model and an Emotional Audio-Textual Corpus ☆148 · Updated last year
- ☆10 · Updated last year
- ☆20 · Updated 5 months ago
- ☆19 · Updated 2 months ago
- Automatic Depression Detection by Multi-model Ensemble, based on the DAIC-WOZ dataset ☆30 · Updated 4 years ago
- The baseline model of the CMDC corpus ☆36 · Updated 2 years ago
- Baseline scripts for the Audio/Visual Emotion Challenge 2019 ☆77 · Updated 2 years ago
- Repository for the paper "Dimensional Speech Emotion Recognition Using Acoustic Features and Word Embeddings using Multitask Learning" ☆16 · Updated 5 months ago
- SpeechFormer++ in PyTorch ☆45 · Updated last year
- Bachelor Thesis - Deep Learning-based Multi-modal Depression Estimation ☆62 · Updated last year
- Detecting depression in a conversation using a Convolutional Neural Network ☆66 · Updated 3 years ago
- Papers using the E-DAIC dataset (AVEC 2019 DDS) ☆27 · Updated last year
- depression-detect: predicting depression from AVEC2014 using ResNet18 ☆42 · Updated 7 months ago
- Code for the ICASSP paper "Speech Emotion Recognition using Semantic Information" https://arxiv.org/pdf/2103… ☆24 · Updated 3 years ago
- A survey of deep multimodal emotion recognition ☆53 · Updated 2 years ago
- Scripts used in the research described in the paper "Multimodal Emotion Recognition with High-level Speech and Text Features" accepted in… ☆51 · Updated 3 years ago
- ☆41 · Updated 4 years ago
- Detect emotion from audio signals of the IEMOCAP dataset using a multi-modal approach. Utilized acoustic features, mel-spectrogram and text as… ☆38 · Updated 10 months ago
- The code for the IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion" ☆116 · Updated 3 years ago
- Detecting Depression with AI Sub-challenge (DDS) of AVEC2019, experiment version via YZK ☆13 · Updated 3 years ago
- ☆12 · Updated 4 years ago
- Depression-Detection: a machine learning algorithm to classify audio using acoustic features in human speech, thus detecting de… ☆14 · Updated 4 years ago
- Multi-modal Speech Emotion Recognition on the IEMOCAP dataset ☆87 · Updated last year
- Official code for the paper "Speech Emotion Recognition with Global-Aware Fusion on Multi-scale Feature Representation" published… ☆46 · Updated 2 years ago
- PyTorch implementation of Audio-Visual Domain Adaptation Feature Fusion for Speech Emotion Recognition ☆12 · Updated 2 years ago
- ☆47 · Updated last year
- IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention" ☆35 · Updated 2 months ago
- The code for the INTERSPEECH 2020 paper "Jointly Fine-Tuning 'BERT-like' Self Supervised Models to Improve Multimodal Speech Emotion R…" ☆116 · Updated 3 years ago