HITsz-TMG / Cognitive-Visual-Language-Mapper
The code and datasets for our ACL 2024 Main Conference paper "Cognitive Visual-Language Mapper: Advancing Multimodal Comprehension with Enhanced Visual Knowledge Alignment"
☆17 · Updated last year
Alternatives and similar repositories for Cognitive-Visual-Language-Mapper
Users interested in Cognitive-Visual-Language-Mapper are comparing it to the libraries listed below.
- ☆68 · Updated 2 years ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆57 · Updated last year
- Official code and data for the ACL 2024 Findings paper "An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models" ☆25 · Updated last year
- [ICML 2025] Official implementation of paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… ☆171 · Updated 4 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models. ☆85 · Updated last year
- [Paper] [AAAI 2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations ☆153 · Updated last year
- Code and model for AAAI 2024: UMIE: Unified Multimodal Information Extraction with Instruction Tuning ☆46 · Updated last year
- [IEEE TMM 2025 & ACL 2024 Findings] LLMs as Bridges: Reformulating Grounded Multimodal Named Entity Recognition ☆37 · Updated 6 months ago
- Implementation of "DIME-FM: DIstilling Multimodal and Efficient Foundation Models" ☆15 · Updated 2 years ago
- [ICLR 2023] This is the code repo for our ICLR '23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆53 · Updated last year
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆52 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning ability, from ICL, PKU ☆50 · Updated 6 months ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigation ☆97 · Updated 2 years ago
- [CVPR 2024] Official Code for the Paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models" ☆145 · Updated last year
- MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering ☆100 · Updated 2 years ago
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆45 · Updated 7 months ago
- ☆88 · Updated last year
- [ACM MM 2023] The released code of paper "Deconfounded Visual Question Generation with Causal Inference" ☆11 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆156 · Updated last year
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆103 · Updated last year
- This repository continuously updates the latest papers, technical reports, and benchmarks on multimodal reasoning! ☆54 · Updated 10 months ago
- [CVPR 2024] How to Configure Good In-Context Sequence for Visual Question Answering ☆21 · Updated 8 months ago
- Official resource for the paper Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models (ACL 20… ☆15 · Updated last year
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆66 · Updated 8 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆71 · Updated 10 months ago
- VQACL: A Novel Visual Question Answering Continual Learning Setting (CVPR'23) ☆44 · Updated last year
- [CVPR25 Highlight] A ChatGPT-Prompted Visual hallucination Evaluation Dataset, featuring over 100,000 data samples and four advanced eval… ☆31 · Updated 9 months ago
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆77 · Updated 6 months ago
- ✨✨ The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆52 · Updated 6 months ago
- Official repository for the A-OKVQA dataset ☆109 · Updated last year