Vision-CAIR / affectiveVisDial
☆13 · Updated last year
Alternatives and similar repositories for affectiveVisDial
Users interested in affectiveVisDial are comparing it to the libraries listed below.
- Explainable Multimodal Emotion Reasoning (EMER), OV-MER (ICML), and AffectGPT (ICML, Oral) ☆260 · Updated 2 months ago
- The Social-IQ 2.0 Challenge Release for the Artificial Social Intelligence Workshop at ICCV '23 ☆33 · Updated 2 years ago
- [ECCV'24] Official implementation for CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… ☆56 · Updated last year
- Official implementation of the ICCV 2023 paper "EmoSet: A large-scale visual emotion dataset with rich attributes" ☆54 · Updated last year
- [ACM ICMR'25] Official repository for "eMotions: A Large-Scale Dataset for Emotion Recognition in Short Videos" ☆35 · Updated 2 months ago
- GPT-4V with Emotion ☆95 · Updated last year
- Official implementation of the paper "S2-VER: Semi-Supervised Visual Emotion Recognition" ☆11 · Updated last year
- [CVPR 2023] Code for "Learning Emotion Representations from Verbal and Nonverbal Communication" ☆52 · Updated 8 months ago
- [TPAMI 2024] Code and models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset ☆303 · Updated 9 months ago
- Code and dataset of "MEmoR: A Dataset for Multimodal Emotion Reasoning in Videos" (MM '20) ☆54 · Updated 2 years ago
- Summary of video-to-text datasets. This repository is part of the review paper *Bridging Vision and Language from the Video-to-Text Pe… ☆128 · Updated last year
- Datasets for image emotion computing ☆37 · Updated 3 years ago
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆106 · Updated 8 months ago
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆188 · Updated last year
- ☆14 · Updated last year
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR '21) ☆172 · Updated 2 months ago
- PyTorch implementation of "Tailor Versatile Multi-modal Learning for Multi-label Emotion Recognition" ☆63 · Updated 2 years ago
- [NeurIPS 2022 Spotlight] Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations ☆141 · Updated last year
- ☆13 · Updated 4 months ago
- [CVPR 2024] EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning ☆36 · Updated 6 months ago
- Winning submission to Subtask 2 of SemEval 2024 Task 3 on multimodal emotion cause analysis in conversations ☆23 · Updated last year
- A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations (ACL 2023) ☆70 · Updated 11 months ago
- Repo for the EMNLP 2023 paper "A Simple Knowledge-Based Visual Question Answering" ☆25 · Updated last year
- ☆22 · Updated 6 months ago
- Official implementation of "Everything at Once - Multi-modal Fusion Transformer for Video Retrieval" (CVPR 2022) ☆112 · Updated 3 years ago
- [NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models ☆156 · Updated 10 months ago
- [NeurIPS 2023] Code and model for VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset ☆290 · Updated last year
- MIntRec: A New Dataset for Multimodal Intent Recognition (ACM MM 2022) ☆114 · Updated 5 months ago
- [CVPR 2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments" ☆290 · Updated last year
- ☆95 · Updated 3 years ago