jiazhen-code / PhD
A Prompted Visual Hallucination Evaluation Dataset (PhD), featuring over 100,000 data points and four advanced evaluation modes. The dataset includes extensive contextual descriptions, counterintuitive images, and clear indicators of hallucination elements.
☆12 · Updated last week
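The description above suggests each data point pairs an image with a context passage, a question, and an annotation of the hallucination-inducing element. The sketch below shows one way such entries might be consumed for a simple yes/no evaluation pass; the JSON layout, the field names (`image`, `context`, `question`, `answer`), and the `model_answer_fn` callback are illustrative assumptions, not the dataset's actual schema or API (see the repository for the real evaluation modes).

```python
import json

def evaluate(model_answer_fn, split_path="phd_split.json"):
    """Score a VLM on a hypothetical PhD-style yes/no split (illustrative only)."""
    with open(split_path) as f:
        entries = json.load(f)  # assumed: a list of dicts with image/context/question/answer

    wrong = 0
    for e in entries:
        # Prepend the (possibly misleading) context passage to the question.
        prompt = f"{e.get('context', '')}\n{e['question']}".strip()
        pred = model_answer_fn(e["image"], prompt)  # user-supplied VLM inference call
        if pred.strip().lower() != e["answer"].strip().lower():
            wrong += 1  # mismatch counted as a hallucinated answer

    print(f"hallucination rate: {wrong / len(entries):.3f}")
```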
Alternatives and similar repositories for PhD:
Users who are interested in PhD are comparing it to the repositories listed below.
- [CVPR 2024] How to Configure Good In-Context Sequence for Visual Question Answering ☆17 · Updated 5 months ago
- This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆60 · Updated 6 months ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆89 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆46 · Updated last year
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆42 · Updated 3 months ago
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆70 · Updated 3 weeks ago
- PyTorch implementation of "Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Larg… ☆20 · Updated 2 weeks ago
- [ICML 2024] "Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models" ☆48 · Updated 5 months ago
- [ICML 2024] Official implementation of "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆80 · Updated 2 months ago
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models ☆38 · Updated 11 months ago
- [SIGIR 2024] Simple but Effective Raw-Data Level Multimodal Fusion for Composed Image Retrieval ☆32 · Updated 7 months ago
- [ICLR 2024, Spotlight] Sentence-level Prompts Benefit Composed Image Retrieval ☆75 · Updated 10 months ago
- ☆35 · Updated 2 years ago
- ☆103 · Updated last week
- VQACL: A Novel Visual Question Answering Continual Learning Setting (CVPR'23) ☆33 · Updated 10 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆43 · Updated 7 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆141 · Updated 9 months ago
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆45 · Updated 10 months ago
- The official implementation of the paper "Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval", accepted by NeurIPS… ☆22 · Updated 9 months ago
- ☆9 · Updated 11 months ago
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight) ☆63 · Updated 7 months ago
- ☆14 · Updated last year
- [ACL'24 Findings] Video-Language Understanding: A Survey from Model Architecture, Model Training, and Data Perspectives ☆37 · Updated 5 months ago
- [AAAI 2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations ☆131 · Updated 8 months ago
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos ☆37 · Updated 9 months ago
- LamRA: Large Multimodal Model as Your Advanced Retrieval Assistant ☆51 · Updated this week
- ☆29 · Updated 7 months ago
- Official implementation of the paper "Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal … ☆39 · Updated last week
- [EMNLP 2024 Findings] The official PyTorch implementation of EchoSight: Advancing Visual-Language Models with Wiki Knowledge ☆53 · Updated last month
- Instruction Tuning in the Continual Learning paradigm ☆39 · Updated 2 weeks ago