bronyayang / HallE_Control
HallE-Control: Controlling Object Hallucination in LMMs
☆28 · Updated 7 months ago
Related projects
Alternatives and complementary repositories for HallE_Control
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆33 · Updated 3 weeks ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆73 · Updated 7 months ago
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆68 · Updated 6 months ago
- The official repo for Debiasing Large Visual Language Models, including a post-hoc debias method and a Visual Debias Decoding strategy ☆72 · Updated 7 months ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆66 · Updated 9 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (CVPR 2024) ☆41 · Updated 4 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆56 · Updated 2 months ago
- 😎 An up-to-date & curated list of awesome LMM hallucination papers, methods & resources. ☆145 · Updated 7 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆43 · Updated 5 months ago
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆47 · Updated 2 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆72 · Updated 2 weeks ago
- ☆24 · Updated 4 months ago
- Instruction Tuning in the Continual Learning paradigm ☆26 · Updated 4 months ago
- ☆47 · Updated 4 months ago
- Official implementation of the paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models' ☆27 · Updated this week
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆41 · Updated last year
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ☆19 · Updated 2 weeks ago
- The first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆47 · Updated 3 months ago
- Code for the paper "AGLA: Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention" ☆16 · Updated 4 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆48 · Updated 5 months ago
- A collection of visual instruction tuning datasets. ☆76 · Updated 8 months ago
- [ECCV 2024] Reflective Instruction Tuning: Mitigating Hallucinations in Large Vision-Language Models ☆15 · Updated 4 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding (see the sketch after this list) ☆210 · Updated last month
- [Preprint] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆40 · Updated last week
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆34 · Updated 7 months ago
- LLaVA-NeXT-Image-Llama3-Lora, modified from https://github.com/arielnlee/LLaVA-1.6-ft ☆39 · Updated 4 months ago
- [NeurIPS'24 D&B] Official Dataloader and Evaluation Scripts for LongVideoBench. ☆66 · Updated 3 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆135 · Updated 6 months ago
- ☆85 · Updated 11 months ago
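
Several of the entries above (VCD, HALC, AGLA) are decoding-time hallucination mitigations. As referenced in the VCD entry, here is a minimal sketch of the visual-contrastive-decoding idea: contrast next-token logits conditioned on the original image against logits conditioned on a distorted copy, so tokens the model would emit even without visual evidence are down-weighted. The model call signature, the Gaussian-noise distortion, and the `alpha` value are illustrative assumptions, not any listed repository's actual API.

```python
# Minimal sketch of visual contrastive decoding, assuming an
# HF-style vision-language model that accepts input_ids and
# pixel_values and returns logits of shape [batch, seq, vocab].
import torch

@torch.no_grad()
def contrastive_next_token(model, input_ids, pixel_values,
                           alpha=1.0, noise_std=0.5):
    """Pick the next token by contrasting clean vs. distorted visual input."""
    # Logits conditioned on the original image.
    clean = model(input_ids=input_ids,
                  pixel_values=pixel_values).logits[:, -1, :]
    # Logits conditioned on a noised copy of the image (Gaussian noise
    # here is a simple stand-in for the paper's distortion step).
    noisy_pixels = pixel_values + noise_std * torch.randn_like(pixel_values)
    noisy = model(input_ids=input_ids,
                  pixel_values=noisy_pixels).logits[:, -1, :]
    # Contrastive combination: amplify what the real image supports,
    # subtract what the model would say regardless of the image.
    contrast = (1 + alpha) * clean - alpha * noisy
    return contrast.argmax(dim=-1)
```

Greedy selection keeps the sketch short; the actual implementations linked above pair this contrastive distribution with sampling and additional constraints on which tokens are eligible.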