katha-ai / EmoTx-CVPR2023
[CVPR 2023] Official code repository for "How you feelin'? Learning Emotions and Mental States in Movie Scenes". https://arxiv.org/abs/2304.05634
☆56 · Updated 9 months ago
Alternatives and similar repositories for EmoTx-CVPR2023
Users interested in EmoTx-CVPR2023 are comparing it to the repositories listed below.
- [ACM ICMR'25] Official repository for "eMotions: A Large-Scale Dataset for Emotion Recognition in Short Videos" ☆33 · Updated last year
- [BMVC'23] Prompting Visual-Language Models for Dynamic Facial Expression Recognition ☆129 · Updated 7 months ago
- PyTorch code for "TVLT: Textless Vision-Language Transformer" (NeurIPS 2022 Oral) ☆125 · Updated 2 years ago
- Official implementation of the ICCV 2023 paper "EmoSet: A large-scale visual emotion dataset with rich attributes" ☆49 · Updated last year
- [CVPR 2023] Code for "Learning Emotion Representations from Verbal and Nonverbal Communication" ☆48 · Updated 4 months ago
- [Information Fusion 2024] HiCMAE: Hierarchical Contrastive Masked Autoencoder for Self-Supervised Audio-Visual Emotion Recognition ☆111 · Updated 8 months ago
- GPT-4V with Emotion ☆93 · Updated last year
- Official implementation of the NeurIPS 2023 paper "Leave No Stone Unturned: Mine Extra Knowledge for Imbalanced Facial Expression Recognit…" ☆28 · Updated last year
- ☆62 · Updated 11 months ago
- [WACV 2024] Code release for "VEATIC: Video-based Emotion and Affect Tracking in Context Dataset" ☆15 · Updated last year
- An Identity-free Video Dataset for Micro-Gesture Understanding and Emotion Analysis (CVPR'21) ☆44 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- [CVPR 2024] EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning ☆34 · Updated 2 months ago
- [ECCV'24] Official implementation for CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… ☆53 · Updated 10 months ago
- Explainable Multimodal Emotion Reasoning (EMER), Open-vocabulary MER (OV-MER), and AffectGPT ☆201 · Updated last week
- Code for "Modeling Multimodal Social Interactions: New Challenges and Baselines with Densely Aligned Representations" (CVPR 2024 Oral) ☆16 · Updated last year
- A Large-scale, Multi-modal, Compound Affective Database for Dynamic Facial Expression Recognition in the Wild ☆49 · Updated 9 months ago
- [AAAI 2023 Oral] CrissCross: Self-Supervised Audio-Visual Representation Learning with Relaxed Cross-Modal Synchronicity ☆25 · Updated 2 years ago
- ☆32 · Updated 2 years ago
- [CVPR'23 Highlight] AutoAD: Movie Description in Context ☆100 · Updated 8 months ago
- Official implementation of MMA-DFER, a multimodal (audio-visual) emotion recognition method ☆38 · Updated 10 months ago
- [TMM 2023] VideoXum: Cross-modal Visual and Textural Summarization of Videos ☆45 · Updated last year
- [FG 2021] Cross Attentional Audio-Visual Fusion for Dimensional Emotion Recognition ☆30 · Updated 7 months ago
- [CVPR 2022] Official implementation of "Everything at Once - Multi-modal Fusion Transformer for Video Retrieval" ☆108 · Updated 3 years ago
- [ECCV 2022] Official repository of "Emotion-aware Multi-view Contrastive Learning for Facial Emotion Recognition" ☆24 · Updated last year
- [TAC 2024] SVFAP: Self-supervised Video Facial Affect Perceiver ☆19 · Updated 9 months ago
- [AAAI 2023] AVCAffe: A Large Scale Audio-Visual Dataset of Cognitive Load and Affect for Remote Work ☆20 · Updated last year
- [CVPR 2022] Code for "Audio-visual Generalised Zero-shot Learning with Cross-modal Attention and …" ☆37 · Updated 2 years ago
- [ECCV 2024] Official implementation of "Listen to Look into the Future: Audio-Visual Egocentric Gaze Anticipation" ☆13 · Updated 4 months ago
- Vision Transformers are Parameter-Efficient Audio-Visual Learners ☆100 · Updated last year