katha-ai / EmoTx-CVPR2023
[CVPR 2023] Official code repository for "How you feelin'? Learning Emotions and Mental States in Movie Scenes". https://arxiv.org/abs/2304.05634
☆58 · Updated last year
Alternatives and similar repositories for EmoTx-CVPR2023
Users interested in EmoTx-CVPR2023 are comparing it to the repositories listed below.
- [BMVC'23] Prompting Visual-Language Models for Dynamic Facial Expression Recognition ☆136 · Updated 11 months ago
- PyTorch code for “TVLT: Textless Vision-Language Transformer” (NeurIPS 2022 Oral) ☆124 · Updated 2 years ago
- [ECCV'24] Official Implementation for CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… ☆57 · Updated last year
- [WACV 2024] Code release for "VEATIC: Video-based Emotion and Affect Tracking in Context Dataset" ☆16 · Updated 2 months ago
- [Information Fusion 2024] HiCMAE: Hierarchical Contrastive Masked Autoencoder for Self-Supervised Audio-Visual Emotion Recognition ☆114 · Updated 2 months ago
- ☆69 · Updated last year
- Official implementation of the NeurIPS 2023 paper: Leave No Stone Unturned: Mine Extra Knowledge for Imbalanced Facial Expression Recognit… ☆32 · Updated 2 years ago
- [CVPR'23 Highlight] AutoAD: Movie Description in Context. ☆100 · Updated last year
- Official implementation of the ICCV 2023 paper "EmoSet: A large-scale visual emotion dataset with rich attributes". ☆57 · Updated last year
- GPT-4V with Emotion ☆96 · Updated last year
- ☆14 · Updated 4 years ago
- [CVPR 2024] EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning ☆36 · Updated 7 months ago
- [ECCV 2022] The official repository of Emotion-aware Multi-view Contrastive Learning for Facial Emotion Recognition ☆24 · Updated 2 years ago
- [ACM ICMR'25] Official repository for "eMotions: A Large-Scale Dataset for Emotion Recognition in Short Videos" ☆35 · Updated 3 months ago
- Code for "Modeling Multimodal Social Interactions: New Challenges and Baselines with Densely Aligned Representations" (CVPR 2024 Oral) ☆17 · Updated last year
- [AAAI 2023 (Oral)] CrissCross: Self-Supervised Audio-Visual Representation Learning with Relaxed Cross-Modal Synchronicity ☆25 · Updated 2 years ago
- [CVPR 2023] Code for "Learning Emotion Representations from Verbal and Nonverbal Communication" ☆53 · Updated 9 months ago
- ☆19 · Updated 4 months ago
- Official code for our CVPR 2023 paper: Test of Time: Instilling Video-Language Models with a Sense of Time ☆46 · Updated last year
- NeurIPS 2023 official implementation code ☆68 · Updated 2 years ago
- A Large-scale, Multi-modal, Compound Affective Database for Dynamic Facial Expression Recognition in the Wild. ☆57 · Updated 2 weeks ago
- This repository provides the code for MMA-DFER, a multimodal (audiovisual) emotion recognition method. This is an official implementation … ☆48 · Updated last year
- Graph learning framework for long-term video understanding ☆68 · Updated 4 months ago
- An Identity-free Video Dataset for Micro-Gesture Understanding and Emotion Analysis (CVPR'21) ☆46 · Updated 2 years ago
- This repository contains the code for our CVPR 2022 paper on "Audio-visual Generalised Zero-shot Learning with Cross-modal Attention and … ☆40 · Updated 2 years ago
- Question-Aware Gaussian Experts for Audio-Visual Question Answering -- Official PyTorch Implementation (CVPR'25, Highlight) ☆24 · Updated 5 months ago
- ☆33 · Updated 2 years ago
- [ECCV 2024] The official implementation of "Listen to Look into the Future: Audio-Visual Egocentric Gaze Anticipation". ☆13 · Updated 8 months ago
- [TMM 2023] VideoXum: Cross-modal Visual and Textural Summarization of Videos ☆51 · Updated last year
- Official implementation of "Everything at Once - Multi-modal Fusion Transformer for Video Retrieval." CVPR 2022 ☆115 · Updated 3 years ago