thunlp / CLEVER
☆22Updated 3 years ago
Alternatives and similar repositories for CLEVER
Users interested in CLEVER are comparing it to the repositories listed below
- MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering☆100Updated 2 years ago
- ☆107Updated 3 years ago
- PyTorch implementation of "Debiased Visual Question Answering from Feature and Sample Perspectives" (NeurIPS 2021)☆27Updated 3 years ago
- Source code for EMNLP 2022 paper “PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models”☆49Updated 3 years ago
- Code for our ACL2021 paper: "Check It Again: Progressive Visual Question Answering via Visual Entailment"☆31Updated 4 years ago
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias☆130Updated 4 years ago
- ☆30Updated 3 years ago
- The code of IJCAI2022 paper, Declaration-based Prompt Tuning for Visual Question Answering☆20Updated 3 years ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023)☆92Updated 2 years ago
- Natural language guided image captioning☆87Updated last year
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation)☆89Updated 2 years ago
- Controllable image captioning model with unsupervised modes☆21Updated 2 years ago
- An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA, AAAI 2022 (Oral)☆87Updated 3 years ago
- A Fast and Accurate One-Stage Approach to Visual Grounding, ICCV 2019 (Oral)☆149Updated 5 years ago
- GraphVQA: Language-Guided Graph Neural Networks for Scene Graph Question Answering☆65Updated 4 years ago
- ☆43Updated 2 years ago
- [IEEE TMM 2025 & ACL 2024 Findings] LLMs as Bridges: Reformulating Grounded Multimodal Named Entity Recognition☆37Updated 6 months ago
- CVPR 2021 Official Pytorch Code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training☆34Updated 4 years ago
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023)☆167Updated last year
- ☆47Updated 3 weeks ago
- Pytorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners☆116Updated 3 years ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning☆134Updated 2 years ago
- Colorful Prompt Tuning for Pre-trained Vision-Language Models☆49Updated 3 years ago
- This repo contains codes and instructions for baselines in the VLUE benchmark.☆41Updated 3 years ago
- ☆25Updated 3 years ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR2022)☆208Updated 3 years ago
- CVPR 2022 (Oral) Pytorch Code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment☆22Updated 3 years ago
- Source code and data used in the papers ViQuAE (Lerner et al., SIGIR'22), Multimodal ICT (Lerner et al., ECIR'23) and Cross-modal Retriev…☆38Updated last year
- Official Implementation for CVPR 2022 paper "Unsupervised Vision-Language Parsing: Seamlessly Bridging Visual Scene Graphs with Language …☆24Updated 3 years ago
- MixGen: A New Multi-Modal Data Augmentation☆126Updated 3 years ago