cosmaadrian / multimodal-depression-from-video
Official source code for the paper: "Reading Between the Frames: Multi-Modal Depression Detection in Videos from Non-Verbal Cues"
☆67 · Updated last year
Alternatives and similar repositories for multimodal-depression-from-video
Users interested in multimodal-depression-from-video are comparing it to the repositories listed below.
- Bachelor Thesis - Deep Learning-based Multi-modal Depression Estimation ☆73 · Updated 2 years ago
- ☆25 · Updated last year
- ☆69 · Updated last year
- depression-detect: Predicting depression from AVEC2014 using ResNet18. ☆51 · Updated last year
- Two-stage Temporal Modelling Framework for Video-based Depression Recognition using Graph Representation ☆26 · Updated 7 months ago
- Automatic Depression Detection by Multi-model Ensemble, based on the DAIC-WOZ dataset. ☆35 · Updated 4 years ago
- This repository provides an implementation of the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data". ☆140 · Updated 10 months ago
- Automatic Depression Detection: a GRU/BiLSTM-based Model and An Emotional Audio-Textual Corpus ☆179 · Updated 2 years ago
- Official source code for the paper: "It’s Just a Matter of Time: Detecting Depression with Time-Enriched Multimodal Transformers" ☆57 · Updated last year
- DEP-Former: Multimodal Depression Recognition Based on Facial Expressions and Audio Features via Emotional Changes ☆16 · Updated 9 months ago
- The baseline model for the CMDC corpus ☆42 · Updated 2 years ago
- ☆22 · Updated 11 months ago
- LI-FPN is an excellent model for depression recognition based on facial expressions. ☆16 · Updated last year
- A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition ☆40 · Updated 11 months ago
- Code for the paper 'Spatial-Temporal Attention Network for Depression Recognition from Facial Videos' ☆29 · Updated 6 months ago
- The final coursework for AI in Mental Health @ PKU. ☆16 · Updated last year
- The code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion". ☆121 · Updated 3 years ago
- MultiModal Sentiment Analysis architectures for CMU-MOSEI. ☆46 · Updated 2 years ago
- ☆273 · Updated last year
- ABAW6 (CVPR-W): we achieved second place in the valence-arousal challenge of ABAW6. ☆23 · Updated last year
- Reproduction of DepAudioNet by Ma et al., "DepAudioNet: An Efficient Deep Model for Audio based Depression Classification" (https://dl.acm.…) ☆78 · Updated 3 years ago
- Efficient Multimodal Transformer with Dual-Level Feature Restoration for Robust Multimodal Sentiment Analysis (TAC 2023) ☆62 · Updated 9 months ago
- IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention" ☆38 · Updated 7 months ago
- [EMNLP 2023] Conversation Understanding using Relational Temporal Graph Neural Networks with Auxiliary Cross-Modality Interaction ☆63 · Updated last year
- ☆11 · Updated last year
- ☆25 · Updated 8 months ago
- ☆33 · Updated last year
- [ICASSP 2025] Official PyTorch code for the training and inference pipeline of DepMamba: Progressive Fusion Mamba for Multimodal Depression … ☆68 · Updated 4 months ago
- A demo for multi-modal emotion recognition. ☆89 · Updated last year
- Data parser for the CMU-MultimodalSDK package, including parsing for the CMU-MOSEI, CMU-MOSI, and POM datasets ☆34 · Updated 11 months ago