asfathermou / human-computer-interaction
University of Chinese Academy of Sciences (UCAS) Human-Computer Interaction course project: multimodal emotion recognition
☆124 · Updated 3 years ago
Alternatives and similar repositories for human-computer-interaction
Users interested in human-computer-interaction are comparing it to the repositories listed below.
- A demo for multi-modal emotion recognition. ☆89 · Updated last year
- Multimodal fusion for sentiment analysis ☆131 · Updated 5 years ago
- Multimodal fusion for sentiment analysis ☆35 · Updated 4 years ago
- This repository provides the implementation for the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data". ☆140 · Updated 9 months ago
- This repository describes the implementation of the 3rd-place solution in the CCAC2023 multimodal conversational emotion recognition evaluation. ☆11 · Updated 10 months ago
- The baseline model of the CMDC corpus ☆42 · Updated 2 years ago
- A list of papers for emotion recognition using machine learning/deep learning. ☆57 · Updated 4 years ago
- IEEE Transactions on Affective Computing 2023 ☆25 · Updated last year
- ☆16 · Updated last year
- The code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion". ☆121 · Updated 3 years ago
- This repository contains the companion code for multimodal sentiment analysis experiments. ☆41 · Updated 2 years ago
- Emotion recognition algorithm based on electrodermal (galvanic skin response) signals ☆20 · Updated 8 years ago
- MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis ☆240 · Updated 2 years ago
- ☆53 · Updated 3 years ago
- Implementation of a multimodal sentiment analysis model ☆10 · Updated last year
- Multimodal emotion recognition combining speech and text, with large-model fine-tuning ☆21 · Updated last year
- Bachelor Thesis - Deep Learning-based Multi-modal Depression Estimation ☆71 · Updated 2 years ago
- MultiModal Sentiment Analysis architectures for CMU-MOSEI. ☆45 · Updated 2 years ago
- Source code for ICASSP 2022 paper "MM-DFN: Multimodal Dynamic Fusion Network For Emotion Recognition in Conversations". ☆89 · Updated 2 years ago
- Automatic Depression Detection: a GRU/BiLSTM-based Model and an Emotional Audio-Textual Corpus ☆177 · Updated last year
- A PyTorch implementation of emotion recognition from videos ☆18 · Updated 4 years ago
- Papers using the E-DAIC dataset (AVEC 2019 DDS) ☆32 · Updated 2 years ago
- Multimodal (text, acoustic, visual) Sentiment Analysis and Emotion Recognition on the CMU-MOSEI dataset. ☆27 · Updated 4 years ago
- Make Acoustic and Visual Cues Matter: CH-SIMS v2.0 Dataset and AV-Mixup Consistent Module ☆78 · Updated 2 years ago
- ☆14 · Updated 6 years ago
- ☆14 · Updated 3 years ago
- Multimodal Emotion Recognition in a video using feature-level fusion of audio and visual modalities ☆15 · Updated 6 years ago
- ☆30 · Updated last year
- Detecting depression levels in employees from videos of the DAIC-WOZ dataset, using LSTMs and Facial Action Units as input. ☆27 · Updated 6 years ago
- depression-detect: Predicting depression from AVEC2014 using ResNet18. ☆49 · Updated last year