Vision-CAIR / affectiveVisDial
☆13 · Updated last year
Alternatives and similar repositories for affectiveVisDial
Users interested in affectiveVisDial are comparing it to the repositories listed below.
- Explainable Multimodal Emotion Reasoning (EMER), Open-vocabulary MER (OV-MER), and AffectGPT ☆205 · Updated last week
- The Social-IQ 2.0 Challenge Release for the Artificial Social Intelligence Workshop at ICCV '23 ☆31 · Updated last year
- GPT-4V with Emotion ☆93 · Updated last year
- The datasets for image emotion computing ☆36 · Updated 3 years ago
- This is the official implementation of the 2023 ICCV paper "EmoSet: A large-scale visual emotion dataset with rich attributes". ☆49 · Updated last year
- [ECCV'24] Official Implementation for CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… ☆53 · Updated 10 months ago
- Code and dataset of "MEmoR: A Dataset for Multimodal Emotion Reasoning in Videos" in MM'20. ☆54 · Updated 2 years ago
- The official implementation of the paper "S2-VER: Semi-Supervised Visual Emotion Recognition" ☆11 · Updated last year
- The code for the paper "ECR-Chain: Advancing Generative Language Models to Better Emotion Cause Reasoners through Reasoning Chains" (IJCA… ☆11 · Updated last year
- [CVPR 2023] Code for "Learning Emotion Representations from Verbal and Nonverbal Communication" ☆48 · Updated 4 months ago
- EmoLLM: Multimodal Emotional Understanding Meets Large Language Models ☆14 · Updated last year
- [ACM ICMR'25] Official repository for "eMotions: A Large-Scale Dataset for Emotion Recognition in Short Videos" ☆33 · Updated last year
- [CVPR 2024] EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning ☆34 · Updated 2 months ago
- The official implementation of the ECCV 2024 paper "Facial Affective Behavior Analysis with Instruction Tuning" ☆26 · Updated 6 months ago
- ☆11 · Updated last month
- PyTorch implementation of "Tailor Versatile Multi-modal Learning for Multi-label Emotion Recognition" ☆60 · Updated 2 years ago
- Our winning submission to Subtask 2 of SemEval 2024 Task 3 on multimodal emotion cause analysis in conversations ☆22 · Updated 11 months ago
- [TPAMI 2024] Codes and Models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset ☆292 · Updated 6 months ago
- A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations (ACL 2023) ☆66 · Updated 8 months ago
- [AAAI 2023] AVCAffe: A Large Scale Audio-Visual Dataset of Cognitive Load and Affect for Remote Work ☆20 · Updated last year
- Training A Small Emotional Vision Language Model for Visual Art Comprehension ☆16 · Updated 11 months ago
- ☆23 · Updated 2 months ago
- [CVPR 2023] Official code repository for "How you feelin'? Learning Emotions and Mental States in Movie Scenes". https://arxiv.org/abs/23… ☆56 · Updated 9 months ago
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆186 · Updated last year
- Summary of Video-to-Text datasets. This repository is part of the review paper *Bridging Vision and Language from the Video-to-Text Pe… ☆126 · Updated last year
- A Large-scale, Multi-modal, Compound Affective Database for Dynamic Facial Expression Recognition in the Wild ☆49 · Updated 9 months ago
- ☆62 · Updated 11 months ago
- ☆14 · Updated 2 weeks ago
- ☆16 · Updated 4 years ago
- MIntRec: A New Dataset for Multimodal Intent Recognition (ACM MM 2022) ☆97 · Updated 2 months ago