kugwzk / DiDE
Code for EMNLP 2022 paper “Distilled Dual-Encoder Model for Vision-Language Understanding”
☆31 · Updated 2 years ago
Alternatives and similar repositories for DiDE
Users that are interested in DiDE are comparing it to the libraries listed below
- This repo contains code and instructions for baselines in the VLUE benchmark. ☆41 · Updated 3 years ago
- CVPR 2022 (Oral) PyTorch code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 · Updated 3 years ago
- VaLM: Visually-augmented Language Modeling. ICLR 2023. ☆56 · Updated 2 years ago
- A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models (ACL 2022) ☆43 · Updated 3 years ago
- ☆108 · Updated 3 years ago
- CVPR 2021 official PyTorch code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆34 · Updated 4 years ago
- [ACL 2021] Learning Relation Alignment for Calibrated Cross-modal Retrieval ☆34 · Updated 2 years ago
- Official implementation of the EMNLP 2022 paper "CPL: Counterfactual Prompt Learning for Vision and Language Models" ☆35 · Updated 3 years ago
- The released data for the paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models". ☆34 · Updated 2 years ago
- [ICML 2022] Code and data for the paper "IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages" ☆49 · Updated 3 years ago
- Code for the ACL 2022 paper "BERT Learns to Teach: Knowledge Distillation with Meta Learning". ☆87 · Updated 3 years ago
- Recent Advances in Visual Dialog ☆30 · Updated 3 years ago
- Vision-Language Pretraining & Efficient Transformer Papers. ☆15 · Updated 4 years ago
- Implementation for the paper "Reliable Visual Question Answering Abstain Rather Than Answer Incorrectly" (ECCV 2022: https//arxiv.org/abs…☆38Updated 2 years ago
- A reading list of papers about Visual Question Answering. ☆35 · Updated 3 years ago
- NAACL 2022: MCSE: Multimodal Contrastive Learning of Sentence Embeddings ☆58 · Updated last year
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆90 · Updated 4 years ago
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration ☆56 · Updated 2 years ago
- Implementation for the paper "Unified Multimodal Model with Unlikelihood Training for Visual Dialog" ☆13 · Updated 2 years ago
- Research code for "KAT: A Knowledge Augmented Transformer for Vision-and-Language" ☆69 · Updated 3 years ago
- Code for ACL 2023 Oral Paper: ManagerTower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning ☆13 · Updated 5 months ago
- [ACL 2023] Delving into the Openness of CLIP ☆24 · Updated 3 years ago
- The SVO-Probes Dataset for Verb Understanding ☆31 · Updated 3 years ago
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias ☆127 · Updated 4 years ago
- Source code and data used in the papers ViQuAE (Lerner et al., SIGIR'22), Multimodal ICT (Lerner et al., ECIR'23) and Cross-modal Retriev… ☆38 · Updated last year
- [2024-ACL]: TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated 2 years ago
- Source code for the paper "Prefix Language Models are Unified Modal Learners" ☆44 · Updated 2 years ago
- Code for EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆41 · Updated 3 years ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆33 · Updated 2 years ago
- Mixture of Attention Heads ☆51 · Updated 3 years ago