affect2mm / emotion-timeseries
☆16 · Updated 4 years ago
Alternatives and similar repositories for emotion-timeseries
Users interested in emotion-timeseries are comparing it to the libraries listed below.
- Code and dataset of "MEmoR: A Dataset for Multimodal Emotion Reasoning in Videos" (MM '20). ☆54 · Updated 2 years ago
- PyTorch implementation of "Tailor Versatile Multi-modal Learning for Multi-label Emotion Recognition". ☆62 · Updated 2 years ago
- [AAAI 2020] Official implementation of VAANet for emotion recognition. ☆80 · Updated last year
- Multi-modal Multi-label Emotion Recognition with Heterogeneous Hierarchical Message Passing. ☆18 · Updated 2 years ago
- Official implementation of the ICCV 2023 paper "EmoSet: A Large-scale Visual Emotion Dataset with Rich Attributes". ☆52 · Updated last year
- FG 2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition. ☆32 · Updated 9 months ago
- Learning Interactions and Relationships between Movie Characters (CVPR '20). ☆21 · Updated 2 years ago
- [CVPR 2023] Code for "Learning Emotion Representations from Verbal and Nonverbal Communication". ☆52 · Updated 7 months ago
- [ACM ICMR '25] Official repository for "eMotions: A Large-Scale Dataset for Emotion Recognition in Short Videos". ☆35 · Updated last month
- NAACL 2022 paper on Analyzing Modality Robustness in Multimodal Sentiment Analysis. ☆31 · Updated 2 years ago
- [ACM MM 2021 Oral] Exploiting BERT for Multimodal Target Sentiment Classification Through Input Space Translation. ☆40 · Updated 4 years ago
- Code for selecting an action based on multimodal inputs; in this case, the inputs are voice and text. ☆73 · Updated 4 years ago
- ☆28 · Updated 3 years ago
- Code for the NAACL 2021 paper "MTAG: Modal-Temporal Attention Graph for Unaligned Human Multimodal Language Sequences". ☆43 · Updated 2 years ago
- PyTorch implementation of the CVPR 2021 paper "Distilling Audio-Visual Knowledge by Compositional Contrastive Learning". ☆89 · Updated 4 years ago
- ☆210 · Updated 3 years ago
- This repository contains the implementation of the paper "Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment An…". ☆71 · Updated 2 years ago
- MUSIC-AVQA, CVPR 2022 (Oral). ☆88 · Updated 2 years ago
- [AAAI 2023] AVCAffe: A Large Scale Audio-Visual Dataset of Cognitive Load and Affect for Remote Work. ☆20 · Updated 2 years ago
- A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations (ACL 2023). ☆69 · Updated 10 months ago
- Official implementation of the paper "MSAF: Multimodal Split Attention Fusion". ☆82 · Updated 4 years ago
- This paper presents the winning submission to Subtask 2 of SemEval 2024 Task 3 on multimodal emotion cause analysis in conversations. ☆23 · Updated last year
- Official implementation of "Everything at Once - Multi-modal Fusion Transformer for Video Retrieval" (CVPR 2022). ☆111 · Updated 3 years ago
- An implementation of the paper "Context-Aware Emotion Recognition Networks". ☆30 · Updated 3 years ago
- [CVPR 2024] EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning. ☆36 · Updated 5 months ago
- PyTorch code for "TVLT: Textless Vision-Language Transformer" (NeurIPS 2022 Oral). ☆126 · Updated 2 years ago
- Code for the AVLnet (Interspeech 2021) and Cascaded Multilingual (Interspeech 2021) papers. ☆52 · Updated 3 years ago
- A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis. ☆126 · Updated 6 months ago
- Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition. ☆30 · Updated 4 years ago
- ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition. ☆47 · Updated last year