matthewvowels1 / Awesome_ML_for_mental_health
A curated list of awesome work on machine learning for mental health applications, including topics broadly captured by affective computing: facial expressions, speech analysis, emotion prediction, depression, interactions, psychiatry, etc.
☆124 · Updated 5 years ago
Alternatives and similar repositories for Awesome_ML_for_mental_health
Users interested in Awesome_ML_for_mental_health are comparing it to the repositories listed below.
- Multimodal Deep Learning Framework for Mental Disorder Recognition @ FG'20 · ☆40 · Updated 3 years ago
- A curated list of awesome affective computing papers, software, open-source projects, and resources · ☆182 · Updated 6 years ago
- Predicting depression from acoustic features of speech using a Convolutional Neural Network · ☆316 · Updated 7 years ago
- Classifying Audio to Emotion · ☆28 · Updated 6 years ago
- Scripts to model depression in speech and text · ☆74 · Updated 5 years ago
- Source code for the paper "Text-based Depression Detection: What Triggers An Alert" · ☆50 · Updated 2 years ago
- Detecting depression in a conversation using a Convolutional Neural Network · ☆74 · Updated 4 years ago
- Understanding emotions from audio files using neural networks and multiple datasets · ☆425 · Updated 2 years ago
- The first Asian machine learning project in Jeju Island, South Korea · ☆74 · Updated 5 years ago
- Using Convolutional Neural Networks in speech emotion recognition on the RAVDESS audio dataset · ☆143 · Updated 4 years ago
- TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text," IEEE SLT-18 · ☆299 · Updated last year
- Lightweight and interpretable ML model for speech emotion recognition and ambiguity resolution (trained on the IEMOCAP dataset) · ☆435 · Updated 2 years ago
- Depression Detection from Speech · ☆35 · Updated 8 years ago
- Deception and emotion detection via audio and video · ☆38 · Updated 6 years ago
- Time series course Fall 2019 project · ☆53 · Updated 5 years ago
- Speaker-independent emotion recognition · ☆327 · Updated last year
- Multi-modal emotion detection from IEMOCAP on speech, text, and motion-capture data using neural nets · ☆169 · Updated 5 years ago
- ☆90 · Updated 3 years ago
- Voice stress analysis (VSA) aims to differentiate between stressed and non-stressed outputs in response to stimuli (e.g., questions posed…) · ☆97 · Updated 4 years ago
- Detecting depressed patients based on speech activity and pauses in speech, using a deep learning approach · ☆20 · Updated 2 years ago
- Human emotion understanding using a multimodal dataset · ☆108 · Updated 5 years ago
- Emotion recognition from audio signals using openSMILE, PCA, and a set of classifiers from the scikit-learn library · ☆47 · Updated 3 years ago
- Official source code for the paper "It's Just a Matter of Time: Detecting Depression with Time-Enriched Multimodal Transformers" · ☆59 · Updated last year
- Baseline scripts for the Audio/Visual Emotion Challenge 2019 · ☆80 · Updated 3 years ago
- ☆34 · Updated 3 years ago
- Speech-based diagnosis of depression · ☆28 · Updated 4 years ago
- Supplementary code for the K-EmoCon dataset · ☆28 · Updated 4 years ago
- Automatic depression detection by multi-model ensemble, based on the DAIC-WOZ dataset · ☆41 · Updated 5 years ago
- MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation · ☆991 · Updated last year
- Reproduction of DepAudioNet by Ma et al. ("DepAudioNet: An Efficient Deep Model for Audio based Depression Classification", https://dl.acm.…) · ☆84 · Updated 4 years ago
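Several of the repositories above follow the same basic recipe: extract acoustic features (e.g., with openSMILE), reduce dimensionality with PCA, and feed the result to a standard classifier from scikit-learn. The sketch below illustrates that pipeline shape under stated assumptions: the "acoustic features" are synthetic stand-ins (real ones would come from an audio feature extractor), and the feature count, classifier, and PCA size are illustrative choices, not any specific repository's configuration.

```python
# Minimal sketch of a features -> PCA -> classifier pipeline, as described in
# several of the listed repos. Assumptions: synthetic features stand in for
# openSMILE output; 88 mimics a typical acoustic feature-set size; the SVM and
# n_components=10 are illustrative, not taken from any specific project.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_SAMPLES, N_FEATURES = 200, 88

# Synthetic "acoustic features": class 1 gets a small mean shift so the
# classifier has a learnable signal.
X = rng.normal(size=(N_SAMPLES, N_FEATURES))
y = rng.integers(0, 2, size=N_SAMPLES)
X[y == 1] += 0.5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Standardize, reduce dimensionality with PCA, then classify with an SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

In a real speech-based depression or emotion pipeline, `X` would be one feature vector per utterance and `y` the clinical or emotion label; the PCA step is what keeps small datasets (common in this area) from overwhelming the classifier with high-dimensional features.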