limanling / clip-event
☆102 · Updated 3 years ago
Alternatives and similar repositories for clip-event
Users interested in clip-event are comparing it to the repositories listed below.
- Cross-media Structured Common Space for Multimedia Event Extraction (ACL 2020) ☆72 · Updated last year
- PyTorch implementation of "Debiased Visual Question Answering from Feature and Sample Perspectives" (NeurIPS 2021) ☆25 · Updated 2 years ago
- Code and instructions for baselines in the VLUE benchmark ☆41 · Updated 2 years ago
- KM-BART: Knowledge Enhanced Multimodal BART for Visual Commonsense Generation ☆31 · Updated 3 years ago
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration ☆57 · Updated last year
- ☆35 · Updated 2 years ago
- A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models (ACL 2022) ☆41 · Updated 3 years ago
- [IJCKG 2022] LaKo: Knowledge-driven Visual Question Answering via Late Knowledge-to-Text Injection ☆26 · Updated last year
- Source code for the EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models" ☆48 · Updated 2 years ago
- Research code for "KAT: A Knowledge Augmented Transformer for Vision-and-Language" ☆63 · Updated 2 years ago
- Official PyTorch code for "UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training" (CVPR 2021) ☆34 · Updated 3 years ago
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias ☆121 · Updated 3 years ago
- Code for the EMNLP 2022 paper "Distilled Dual-Encoder Model for Vision-Language Understanding" ☆30 · Updated 2 years ago
- Recent Advances in Visual Dialog ☆30 · Updated 2 years ago
- VisualCOMET: Reasoning about the Dynamic Context of a Still Image ☆85 · Updated last year
- ☆28 · Updated 2 years ago
- MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering ☆95 · Updated 2 years ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆90 · Updated last year
- Code for the ACL 2021 paper "Check It Again: Progressive Visual Question Answering via Visual Entailment" ☆31 · Updated 3 years ago
- Source code and data for "Things not Written in Text: Exploring Spatial Commonsense from Visual Signals" (ACL 2022 main conference paper) ☆20 · Updated 2 years ago
- [ICML 2022] Code and data for "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated 2 years ago
- Implementation of the paper "Unified Multimodal Model with Unlikelihood Training for Visual Dialog" ☆13 · Updated 2 years ago
- PyTorch code for "Improving Commonsense in Vision-Language Models via Knowledge Graph Riddles" (DANCE) ☆23 · Updated 2 years ago
- ☆39 · Updated 2 years ago
- Code and data for "Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning" (EMNLP 2021) ☆28 · Updated 3 years ago
- Pre-trained vision-and-language model summary ☆13 · Updated 4 years ago
- Repository for VisualSem: a high-quality knowledge graph to support research in vision and language ☆88 · Updated 2 years ago
- Resources and code for the ICME 2021 paper "MNRE: A Challenge Multimodal Dataset for Neural Relation Extraction with Visual Evidence in Social…" ☆55 · Updated 3 years ago
- PyTorch code for "Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners" ☆115 · Updated 2 years ago
- Dataset and code for the EMNLP 2022 paper "Visual Named Entity Linking: A New Dataset and A Baseline" ☆25 · Updated 2 years ago