jaeyun95 / pre-trained-vlk-model
Pre-trained vision-and-language model summary
☆12, updated 4 years ago
Alternatives and similar repositories for pre-trained-vlk-model
Users interested in pre-trained-vlk-model are comparing it to the repositories listed below.
- ☆44, updated 3 months ago
- CVPR 2021 official PyTorch code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training (☆34, updated 3 years ago)
- Multitask Multilingual Multimodal Pre-training (☆71, updated 2 years ago)
- ☆106, updated 3 years ago
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration (☆56, updated 2 years ago)
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias (☆125, updated 3 years ago)
- Research code for the NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": UNITER… (☆119, updated 4 years ago)
- A reading list of papers about Visual Question Answering (☆33, updated 3 years ago)
- Research code for the NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": LXMERT… (☆21, updated 4 years ago)
- Source code of the ACL 2020 paper "Cross-Modality Relevance for Reasoning on Language and Vision" (☆27, updated 4 years ago)
- Counterfactual Samples Synthesizing for Robust VQA (☆78, updated 2 years ago)
- Code for the ACL 2021 paper "Check It Again: Progressive Visual Question Answering via Visual Entailment" (☆31, updated 3 years ago)
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) (☆91, updated 2 years ago)
- Code for the paper "Towards Diverse Paragraph Captioning for Untrimmed Videos" (CVPR 2021) (☆66, updated 3 years ago)
- VisualCOMET: Reasoning about the Dynamic Context of a Still Image (☆88, updated 2 years ago)
- PyTorch code for "Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners" (☆115, updated 3 years ago)
- MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering (☆98, updated 2 years ago)
- ☆38, updated 2 years ago
- A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models (ACL 2022) (☆43, updated 3 years ago)
- ☆14, updated 4 years ago
- [EMNLP 2020] What is More Likely to Happen Next? Video-and-Language Future Event Prediction (☆51, updated 3 years ago)
- Official code and dataset link for "VMSMO: Learning to Generate Multimodal Summary for Video-based News Articles" (☆36, updated 4 years ago)
- ☆16, updated 3 years ago
- PyTorch implementation of MVP: A Multi-Stage Vision-Language Pre-training Framework (☆34, updated 2 years ago)
- Controllable image captioning model with unsupervised modes (☆21, updated 2 years ago)
- Implementation for the paper "Unified Multimodal Model with Unlikelihood Training for Visual Dialog" (☆13, updated 2 years ago)
- [CVPR 2020] Transform and Tell: Entity-Aware News Image Captioning (☆92, updated last year)
- Data release for the VALUE benchmark (☆31, updated 3 years ago)
- Official code for the paper "Spatially Aware Multimodal Transformers for TextVQA", published at ECCV 2020 (☆64, updated 4 years ago)
- PyTorch implementation of "Debiased Visual Question Answering from Feature and Sample Perspectives" (NeurIPS 2021) (☆25, updated 2 years ago)