matalvepu / HKT
Humor Knowledge Enriched Transformer
☆31 · Updated 4 years ago
Alternatives and similar repositories for HKT
Users interested in HKT are comparing it to the libraries listed below.
- This repository presents the UR-FUNNY dataset: the first dataset for multimodal humor detection ☆150 · Updated 4 years ago
- ☆48 · Updated 6 years ago
- ☆53 · Updated 4 years ago
- NAACL 2022 paper on Analyzing Modality Robustness in Multimodal Sentiment Analysis ☆31 · Updated 2 years ago
- Code and dataset of "MEmoR: A Dataset for Multimodal Emotion Reasoning in Videos" in MM'20. ☆55 · Updated 2 years ago
- [ACM MM 2021 Oral] "Exploiting BERT For Multimodal Target Sentiment Classification Through Input Space Translation" ☆40 · Updated 4 years ago
- Code and data for the ACL 2022 main conference paper "MSCTD: A Multimodal Sentiment Chat Translation Dataset" ☆42 · Updated 11 months ago
- ☆212 · Updated 4 years ago
- The official code of our paper at EMNLP 2022: Back to the Future: Bidirectional Information Decoupling Network for Multi-turn Dialogue Mo… ☆16 · Updated 2 years ago
- ☆44 · Updated 6 months ago
- The code repository for EMNLP 2021 paper "Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization". ☆55 · Updated 3 years ago
- Source code for ICASSP 2022 paper: EmotionFlow: Capture the Dialogue Level Emotion Transitions ☆27 · Updated 3 years ago
- Multi-modal Multi-label Emotion Recognition with Heterogeneous Hierarchical Message Passing ☆18 · Updated 3 years ago
- Code for ACL 2022 main conference paper "Neural Machine Translation with Phrase-Level Universal Visual Representations". ☆21 · Updated 2 years ago
- Official code for our COLING 2022 paper: In-Context Learning for Empathetic Dialogue Generation ☆21 · Updated 2 years ago
- Audio Visual Scene-Aware Dialog (AVSD) Challenge at the 10th Dialog System Technology Challenge (DSTC) ☆27 · Updated 3 years ago
- CVPR 2021 Official Pytorch Code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆34 · Updated 4 years ago
- A collection of multimodal datasets and visual features for VQA and captioning in PyTorch. Just run "pip install multimodal" ☆83 · Updated 3 years ago
- Code for NAACL 2021 paper: MTAG: Modal-Temporal Attention Graph for Unaligned Human Multimodal Language Sequences ☆42 · Updated 2 years ago
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated 3 years ago
- Code, Models and Datasets for OpenViDial Dataset ☆132 · Updated 3 years ago
- NAACL 2022: MCSE: Multimodal Contrastive Learning of Sentence Embeddings ☆58 · Updated last year
- Context Modeling with Speaker's Pre-trained Memory Tracking for Emotion Recognition in Conversation (NAACL 2022) ☆64 · Updated 2 years ago
- [ICLR 2019] Learning Factorized Multimodal Representations ☆67 · Updated 5 years ago
- Multimodal datasets. ☆33 · Updated last year
- We ranked 1st in the DSTC8 Audio-Visual Scene-Aware Dialog competition. This is the source code for our IEEE/ACM TASLP (AAAI2020-DSTC8-AVSD… ☆56 · Updated 2 years ago
- EACL 2023 paper "MLASK: Multimodal Summarization of Video-based News Articles" ☆12 · Updated 2 years ago
- ☆25 · Updated 3 years ago
- ☆49 · Updated 2 years ago
- DSTC10 Track1 - MOD: Internet Meme Incorporated Open-domain Dialog ☆51 · Updated 2 years ago