innerfirexy / Life-lessons
A dataset of first-person monologue videos, transcripts, and annotations about "life lessons" in various domains. The main purpose is multi-modal language analysis and modeling.
☆17 · Updated last year
Alternatives and similar repositories for Life-lessons
Users interested in Life-lessons are comparing it to the libraries listed below.
- Code base for "Detecting Subtle Differences between Human and Model Languages Using Spectrum of Relative Likelihood" ☆14 · Updated 6 months ago
- ☆14 · Updated last year
- A comprehensive overview of affective computing research in the era of large language models (LLMs). ☆30 · Updated last year
- [ACM MM 2022]: Multi-Modal Experience Inspired AI Creation ☆21 · Updated last year
- ☆104 · Updated 7 months ago
- Official implementation of our paper at ACL 2023: Pre-training Multi-party Dialogue Models with Latent Discourse Inference ☆10 · Updated 2 years ago
- Code and dataset release for "PACS: A Dataset for Physical Audiovisual CommonSense Reasoning" (ECCV 2022) ☆17 · Updated 3 years ago
- The Social-IQ 2.0 Challenge Release for the Artificial Social Intelligence Workshop at ICCV '23 ☆36 · Updated 2 years ago
- ☆38 · Updated 10 months ago
- ☆14 · Updated 8 months ago
- Humor Knowledge Enriched Transformer ☆32 · Updated 4 years ago
- [ACL 2024] A Multimodal, Multigenre, and Multipurpose Audio-Visual Academic Lecture Dataset ☆25 · Updated 8 months ago
- ☆16 · Updated 5 years ago
- Large-Vocabulary Continuous Sign Language Recognition, 2024 ☆15 · Updated last year
- ☆27 · Updated 9 months ago
- Public repo for the paper: "Modeling Intensification for Sign Language Generation: A Computational Approach" by Mert Inan*, Yang Zhong*, … ☆14 · Updated 3 years ago
- M3ED: Multi-modal Multi-scene Multi-label Emotional Dialogue Database. ACL 2022 ☆121 · Updated 3 years ago
- Data for evaluating GPT-4V ☆11 · Updated 2 years ago
- 🚀 Pre-process, annotate, evaluate, and train your Affect Computing (e.g., Multimodal Emotion Recognition, Sentiment Analysis) datasets A… ☆79 · Updated 3 weeks ago
- A PyTorch implementation of paper "Learning Shared Semantic Space for Speech-to-Text Translation", ACL (Findings) 2021 ☆48 · Updated 3 years ago
- ☆18 · Updated 7 months ago
- Code for the AVLnet (Interspeech 2021) and Cascaded Multilingual (Interspeech 2021) papers. ☆53 · Updated 3 years ago
- Code for ACL 2023 main conference paper "CMOT: Cross-modal Mixup via Optimal Transport for Speech Translation" ☆17 · Updated last year
- Code and dataset of "MEmoR: A Dataset for Multimodal Emotion Reasoning in Videos" in MM'20. ☆55 · Updated 2 years ago
- This repository contains code and metadata of the How2 dataset ☆192 · Updated last year
- [ICASSP 2024] Code for paper "SDIF-DA: A Shallow-to-Deep Interaction Framework with Data Augmentation for Multi-modal Intent Detection" ☆15 · Updated last year
- Frozen Pretrained Transformers for Neural Sign Language Translation ☆15 · Updated 3 years ago
- EmoLLM: Multimodal Emotional Understanding Meets Large Language Models ☆19 · Updated last year
- Narrative movie understanding benchmark ☆76 · Updated 7 months ago
- [CVPR 2024] MMSum: A Dataset for Multimodal Summarization and Thumbnail Generation of Videos ☆37 · Updated last year