innerfirexy / Life-lessons
A dataset of first-person monologue videos, transcripts, and annotations about "life lessons" in various domains, intended primarily for multi-modal language analysis and modeling.
☆17 · Updated last year
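As a rough illustration of how such transcript/annotation pairs might be consumed for multi-modal language analysis, the sketch below walks a directory of per-sample JSON files and yields each transcript together with its annotations. The directory layout, file format, and field names (`data/`, `transcript`, `annotations`) are assumptions for illustration only; consult the repository for the actual release format.

```python
# Minimal sketch, assuming each sample is released as a JSON file holding the
# transcript and its annotations. The directory layout and field names below
# ("data/", "transcript", "annotations") are hypothetical, not the actual
# Life-lessons release format.
import json
from pathlib import Path

DATA_DIR = Path("data")  # hypothetical location of the released files


def iter_samples(data_dir: Path):
    """Yield (transcript_text, annotations) pairs from per-sample JSON files."""
    for path in sorted(data_dir.glob("*.json")):
        with path.open(encoding="utf-8") as f:
            record = json.load(f)
        # Assumed keys; the real dataset may structure its fields differently.
        yield record.get("transcript", ""), record.get("annotations", {})


if __name__ == "__main__":
    for transcript, annotations in iter_samples(DATA_DIR):
        print(len(transcript.split()), "tokens;", sorted(annotations))
```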
Alternatives and similar repositories for Life-lessons
Users interested in Life-lessons are comparing it to the libraries listed below.
- Code base for "Detecting Subtle Differences between Human and Model Languages Using Spectrum of Relative Likelihood" ☆14 · Updated 6 months ago
- A comprehensive overview of affective computing research in the era of large language models (LLMs). ☆30 · Updated last year
- ☆14 · Updated 8 months ago
- Code for ACL 2022 main conference paper "STEMM: Self-learning with Speech-text Manifold Mixup for Speech Translation". ☆36 · Updated 2 years ago
- ☆103 · Updated 7 months ago
- Code and dataset release for "PACS: A Dataset for Physical Audiovisual CommonSense Reasoning" (ECCV 2022) ☆17 · Updated 3 years ago
- ☆14 · Updated last year
- Code for ACL 2022 main conference paper "Neural Machine Translation with Phrase-Level Universal Visual Representations". ☆21 · Updated 2 years ago
- Danmaku dataset ☆11 · Updated 2 years ago
- This is the official code repository for the paper 'Cross-modality Data Augmentation for End-to-End Sign Language Translation'. Accepted… ☆16 · Updated 2 years ago
- This repository contains code and metadata of the How2 dataset ☆192 · Updated last year
- The Social-IQ 2.0 Challenge Release for the Artificial Social Intelligence Workshop at ICCV '23 ☆36 · Updated 2 years ago
- ☆73 · Updated last year
- Code for ACL 2023 main conference paper "CMOT: Cross-modal Mixup via Optimal Transport for Speech Translation" ☆17 · Updated last year
- Large-Vocabulary Continuous Sign Language Recognition, 2024 ☆15 · Updated last year
- Social Chemistry 101: Learning to Reason about Social and Moral Norms ☆34 · Updated 2 years ago
- ☆27 · Updated 9 months ago
- [ICASSP 2024] Code for the paper "SDIF-DA: A Shallow-to-Deep Interaction Framework with Data Augmentation for Multi-modal Intent Detection" ☆15 · Updated last year
- [ACL 2024] A Multimodal, Multigenre, and Multipurpose Audio-Visual Academic Lecture Dataset ☆25 · Updated 8 months ago
- Data for evaluating GPT-4V ☆11 · Updated 2 years ago
- Code and dataset of "MEmoR: A Dataset for Multimodal Emotion Reasoning in Videos" in MM '20. ☆55 · Updated 2 years ago
- av-SALMONN: Speech-Enhanced Audio-Visual Large Language Models ☆13 · Updated last year
- M3ED: Multi-modal Multi-scene Multi-label Emotional Dialogue Database. ACL 2022 ☆121 · Updated 3 years ago
- ACM MM 2022 paper "AVQA: A Dataset for Audio-Visual Question Answering on Videos" ☆15 · Updated 2 years ago
- Humor Knowledge Enriched Transformer ☆32 · Updated 4 years ago
- This is the repository of our ACL 2024 paper "ESCoT: Towards Interpretable Emotional Support Dialogue Systems". ☆36 · Updated 9 months ago
- PyTorch code for the EMNLP 2023 main conference paper "How to Enhance Causal Discrimination of Utterances: A Case on Affective Reasoning" and … ☆18 · Updated last year
- Frozen Pretrained Transformers for Neural Sign Language Translation ☆15 · Updated 3 years ago
- PyTorch code for the TAC paper "Cluster-Level Contrastive Learning for Emotion Recognition in Conversations" ☆25 · Updated 2 years ago
- Awesome-Emotion-Reasoning is a collection of emotion reasoning works, including papers, code, and datasets ☆76 · Updated last month