Vision-CAIR / affectiveVisDial
☆13 · Updated last year
Alternatives and similar repositories for affectiveVisDial
Users interested in affectiveVisDial are comparing it to the repositories listed below.
- Explainable Multimodal Emotion Reasoning (EMER), OV-MER (ICML), and AffectGPT (ICML, Oral) ☆302 · Updated 4 months ago
- Code and dataset for "MEmoR: A Dataset for Multimodal Emotion Reasoning in Videos" (MM'20) ☆55 · Updated 2 years ago
- GPT-4V with Emotion ☆97 · Updated 2 years ago
- [ACM ICMR'25] Official repository for "eMotions: A Large-Scale Dataset for Emotion Recognition in Short Videos" ☆36 · Updated 4 months ago
- [CVPR 2023] Code for "Learning Emotion Representations from Verbal and Nonverbal Communication" ☆54 · Updated 9 months ago
- [ECCV’24] Official implementation for CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… ☆57 · Updated last year
- [CVPR 2024] EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning ☆38 · Updated 7 months ago
- Datasets for image emotion computing ☆39 · Updated 3 years ago
- Official implementation of the paper "S2-VER: Semi-Supervised Visual Emotion Recognition" ☆11 · Updated last year
- Repo for the EMNLP 2023 paper "A Simple Knowledge-Based Visual Question Answering" ☆25 · Updated last year
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆190 · Updated last year
- PyTorch implementation of "Tailor Versatile Multi-modal Learning for Multi-label Emotion Recognition" ☆64 · Updated 3 years ago
- Official implementation of the ICCV 2023 paper "EmoSet: A Large-scale Visual Emotion Dataset with Rich Attributes" ☆60 · Updated last year
- The Social-IQ 2.0 Challenge Release for the Artificial Social Intelligence Workshop at ICCV '23 ☆35 · Updated 2 years ago
- ☆69 · Updated last year
- [TPAMI 2024] Code and models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset ☆305 · Updated 11 months ago
- Official repository for the paper “MME-Emotion: A Holistic Evaluation Benchmark for Emotional Intelligence in Multimodal Large Language M… ☆17 · Updated 3 months ago
- ☆20 · Updated 5 months ago
- PyTorch code for “TVLT: Textless Vision-Language Transformer” (NeurIPS 2022 Oral) ☆124 · Updated 2 years ago
- Summary of Video-to-Text datasets. This repository is part of the review paper *Bridging Vision and Language from the Video-to-Text Pe… ☆131 · Updated 2 years ago
- [ICML'25 Spotlight] Catch Your Emotion: Sharpening Emotion Perception in Multimodal Large Language Models ☆42 · Updated 2 weeks ago
- A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations (ACL 2023) ☆73 · Updated last year
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR'21) ☆179 · Updated 4 months ago
- Training A Small Emotional Vision Language Model for Visual Art Comprehension ☆15 · Updated last year
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆107 · Updated 10 months ago
- [AAAI 2023] AVCAffe: A Large Scale Audio-Visual Dataset of Cognitive Load and Affect for Remote Work ☆22 · Updated 3 weeks ago
- A Large-scale, Multi-modal, Compound Affective Database for Dynamic Facial Expression Recognition in the Wild ☆58 · Updated last month
- [NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models ☆157 · Updated last year
- [CVPR 2023] Official code repository for "How you feelin'? Learning Emotions and Mental States in Movie Scenes". https://arxiv.org/abs/23… ☆58 · Updated last year
- Official implementation of "Incomplete Multimodality-Diffused Emotion Recognition" in PyTorch (NeurIPS 2023) ☆59 · Updated 2 years ago