Gary-code / KECVQG
[ACM MM 2023] Released code for the paper "Deconfounded Visual Question Generation with Causal Inference"
☆11 · Updated last year
Alternatives and similar repositories for KECVQG
Users interested in KECVQG are comparing it to the repositories listed below.
- [CVPR 2024] How to Configure Good In-Context Sequence for Visual Question Answering ☆20 · Updated 6 months ago
- ☆67 · Updated 2 years ago
- [ICLR 2023] This is the code repo for our ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆53 · Updated last year
- [EMNLP 2023] InfoSeek: A New VQA Benchmark focusing on Visual Info-Seeking Questions ☆25 · Updated last year
- Official repository for the A-OKVQA dataset ☆105 · Updated last year
- A multimodal retrieval dataset ☆24 · Updated 2 years ago
- [IEEE TMM 2025 & ACL 2024 Findings] LLMs as Bridges: Reformulating Grounded Multimodal Named Entity Recognition ☆35 · Updated 5 months ago
- Code for our EMNLP-2022 paper: "Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA" ☆40 · Updated 3 years ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigation ☆98 · Updated last year
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆83 · Updated last year
- Recent Advances in Visual Dialog ☆30 · Updated 3 years ago
- Code and model for AAAI 2024: UMIE: Unified Multimodal Information Extraction with Instruction Tuning ☆45 · Updated last year
- [EMNLP'24 Main] Encoding and Controlling Global Semantics for Long-form Video Question Answering ☆18 · Updated last year
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆56 · Updated last year
- ☆27 · Updated last year
- [ICCV 2023 (Oral)] Open-domain Visual Entity Recognition: Towards Recognizing Millions of Wikipedia Entities ☆44 · Updated 6 months ago
- ☆85 · Updated last year
- Source code and data used in the papers ViQuAE (Lerner et al., SIGIR'22), Multimodal ICT (Lerner et al., ECIR'23) and Cross-modal Retriev… ☆38 · Updated last year
- ☆19 · Updated 2 years ago
- Code and data for ACL 2024 paper on 'Cross-Modal Projection in Multimodal LLMs Doesn't Really Project Visual Attributes to Textual Space' ☆17 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆155 · Updated last year
- MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering ☆99 · Updated 2 years ago
- The Code for Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models ☆16 · Updated last year
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆233 · Updated 4 months ago
- Paper, dataset and code list for multimodal dialogue ☆22 · Updated 11 months ago
- ☆42 · Updated 2 years ago
- Code for our EMNLP-2022 paper: "Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning" ☆16 · Updated 2 years ago
- 😎 Curated list of awesome LMM hallucinations papers, methods & resources ☆150 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆50 · Updated 5 months ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆134 · Updated 2 years ago