HITsz-TMG / Cognitive-Visual-Language-Mapper
The code and datasets for our ACL 2024 Main Conference paper "Cognitive Visual-Language Mapper: Advancing Multimodal Comprehension with Enhanced Visual Knowledge Alignment"
☆17 · Updated last year
Alternatives and similar repositories for Cognitive-Visual-Language-Mapper
Users interested in Cognitive-Visual-Language-Mapper are comparing it to the repositories listed below.
- ☆68 · Updated 2 years ago
- This is the official repository for Retrieval Augmented Visual Question Answering ☆244 · Updated last year
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆58 · Updated last year
- [ACM MM 2023] The released code of paper "Deconfounded Visual Question Generation with Causal Inference" ☆11 · Updated last year
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆98 · Updated 2 years ago
- [Paper][AAAI 2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations ☆152 · Updated last year
- [EMNLP 2024 Findings] The official PyTorch implementation of EchoSight: Advancing Visual-Language Models with Wiki Knowledge. ☆77 · Updated last week
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models. ☆85 · Updated last year
- [ICML 2025] Official implementation of paper "Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in…" ☆185 · Updated 4 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆156 · Updated last year
- The official repo for the EMNLP 2024 (main) paper "EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimo…" ☆20 · Updated 9 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆51 · Updated last year
- [IEEE TMM 2025 & ACL 2024 Findings] LLMs as Bridges: Reformulating Grounded Multimodal Named Entity Recognition ☆36 · Updated 6 months ago
- Code and model for AAAI 2024: UMIE: Unified Multimodal Information Extraction with Instruction Tuning ☆45 · Updated last year
- Official code and data for the ACL 2024 Findings paper "An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models" ☆25 · Updated last year
- [ICLR 2023] This is the code repo for our ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa…" ☆53 · Updated last year
- Recent Advances in Visual Dialog ☆30 · Updated 3 years ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from ICL, PKU ☆50 · Updated 6 months ago
- Natural language guided image captioning ☆87 · Updated last year
- Official repository for the A-OKVQA dataset ☆109 · Updated last year
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆45 · Updated 7 months ago
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆102 · Updated last year
- Colorful Prompt Tuning for Pre-trained Vision-Language Models ☆49 · Updated 3 years ago
- This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆90 · Updated last year
- An Easy-to-use Hallucination Detection Framework for LLMs. ☆63 · Updated last year
- The official repo for RGCL: Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning and RA-HMD: Robust Adaptation o… ☆30 · Updated last month
- ☆88 · Updated last year
- Code for our EMNLP-2022 paper: "Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning" ☆16 · Updated 2 years ago
- ☆25 · Updated last year
- Official resource for paper Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models (ACL 20… ☆15 · Updated last year