gokulkarthik / hateclipper
Hate-CLIPper: Multimodal Hateful Meme Classification with Explicit Cross-modal Interaction of CLIP features - Accepted at EMNLP 2022 Workshop
☆42 · Updated last year
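For context on the parent repository: Hate-CLIPper classifies hateful memes by fusing CLIP image and text embeddings through an explicit feature interaction matrix (an outer product of projected features) that a classifier head then scores. The sketch below illustrates that idea only; the class name, projection sizes, and classifier head are illustrative assumptions, not the repository's actual code.

```python
# Minimal sketch of the feature-interaction idea behind Hate-CLIPper.
# All dimensions, names, and the classifier head are assumptions for
# illustration; see the repository for the actual implementation.
import torch
import torch.nn as nn

class FeatureInteractionClassifier(nn.Module):
    def __init__(self, clip_dim: int = 512, proj_dim: int = 128, num_classes: int = 2):
        super().__init__()
        # Project CLIP image/text embeddings into a shared space.
        self.img_proj = nn.Linear(clip_dim, proj_dim)
        self.txt_proj = nn.Linear(clip_dim, proj_dim)
        # Score the flattened pairwise interaction matrix.
        self.head = nn.Sequential(
            nn.Linear(proj_dim * proj_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        i = self.img_proj(img_feat)                 # (B, proj_dim)
        t = self.txt_proj(txt_feat)                 # (B, proj_dim)
        # Outer product: every image feature paired with every text feature.
        fim = torch.einsum("bi,bj->bij", i, t)      # (B, proj_dim, proj_dim)
        return self.head(fim.flatten(1))            # (B, num_classes)
```

Flattening the outer product lets the head weight each image-text feature pair independently, at the cost of an input that grows quadratically with the projection size, which is why the projection dimension is kept small.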
Related projects
Alternatives and complementary repositories for hateclipper
- Dataset and Code for Multimodal Fact Checking and Explanation Generation (Mocheg) ☆39 · Updated 11 months ago
- This repository provides a comprehensive collection of research papers focused on multimodal representation learning, all of which have b… ☆68 · Updated last year
- Code and data for ImageCoDe, a contextual vision-and-language benchmark ☆39 · Updated 8 months ago
- ☆19 · Updated 7 months ago
- Source code and data used in the papers ViQuAE (Lerner et al., SIGIR'22), Multimodal ICT (Lerner et al., ECIR'23) and Cross-modal Retriev… ☆26 · Updated 9 months ago
- ☆33 · Updated last year
- ICCV 2023 (Oral) Open-domain Visual Entity Recognition: Towards Recognizing Millions of Wikipedia Entities ☆32 · Updated 2 months ago
- Code for WACV 2023 paper "VLC-BERT: Visual Question Answering with Contextualized Commonsense Knowledge" ☆21 · Updated last year
- Code for our EMNLP 2023 paper: Beneath the Surface: Unveiling Harmful Memes with Multimodal Reasoning Distilled from Large Language Mode… ☆10 · Updated 6 months ago
- This is the official implementation of the paper "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision… ☆19 · Updated 7 months ago
- [EMNLP'21] Visual News: Benchmark and Challenges in News Image Captioning ☆86 · Updated 3 months ago
- [ICLR 2023] This is the code repo for our ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆43 · Updated 4 months ago
- Implementation of our ACL 2023 paper: Unifying Cross-Lingual and Cross-Modal Modeling Towards Weakly Supervised Multilingual Vision-Langua… ☆15 · Updated last year
- ☆18 · Updated 3 months ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆133 · Updated last year
- ☆25 · Updated this week
- SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection ☆30 · Updated 2 months ago
- A multimodal retrieval dataset ☆22 · Updated last year
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" ☆25 · Updated 11 months ago
- Corpus to accompany "Do Androids Laugh at Electric Sheep? Humor 'Understanding' Benchmarks from The New Yorker Caption Contest" ☆52 · Updated this week
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆44 · Updated 9 months ago
- This repository contains code to evaluate various multimodal large language models using different instructions across multiple multimoda… ☆24 · Updated 6 months ago
- Research code for "KAT: A Knowledge Augmented Transformer for Vision-and-Language" ☆60 · Updated 2 years ago
- Code for our EMNLP 2022 paper "Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning" ☆12 · Updated last year
- ☆40 · Updated last year
- [ICML 2022] Code and data for our paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated last year
- EMNLP 2023 Papers: Explore cutting-edge research from EMNLP 2023, the premier conference for advancing empirical methods in natural langu… ☆97 · Updated 5 months ago
- ☆111 · Updated 2 years ago
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models ☆90 · Updated 2 months ago
- A curated list of vision-and-language pre-training (VLP). :-) ☆56 · Updated 2 years ago