Ziwei-Zheng / VaLSe
A library of visualization tools for the interpretability and hallucination analysis of large vision-language models (LVLMs).
☆34 Updated last month
Alternatives and similar repositories for VaLSe
Users interested in VaLSe are comparing it to the libraries listed below.
- Code for the paper "Nullu: Mitigating Object Hallucinations in Large Vision-Language Models via HalluSpace Projection" ☆35 Updated 4 months ago
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆88 Updated 7 months ago
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ☆31 Updated 8 months ago
- ☆44 Updated last month
- This repo contains the code for the paper "Understanding and Mitigating Hallucinations in Large Vision-Language Models via Modular Attrib… ☆19 Updated 4 months ago
- HallE-Control: Controlling Object Hallucination in LMMs ☆31 Updated last year
- ☆47 Updated 7 months ago
- ☆126 Updated 5 months ago
- This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strat… ☆78 Updated 4 months ago
- Instruction Tuning in Continual Learning paradigm ☆53 Updated 5 months ago
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆37 Updated last year
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆63 Updated 7 months ago
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024) ☆46 Updated 6 months ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆89 Updated 7 months ago
- [NeurIPS 2024] Mitigating Object Hallucination via Concentric Causal Attention ☆57 Updated 6 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆45 Updated last year
- Official code and data for the ACL 2024 Findings paper "An Empirical Study on Parameter-Efficient Fine-Tuning for Multimodal Large Language Models" ☆19 Updated 8 months ago
- PyTorch Implementation of "Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Larg… ☆24 Updated 2 months ago
- ☆88 Updated 3 months ago
- Code for ICLR 2025 Paper: Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in LVLMs ☆16 Updated 2 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆52 Updated 8 months ago
- An up-to-date curated list of state-of-the-art research on hallucinations in large vision-language models: papers and resources ☆143 Updated 2 months ago
- [ICLR 2025] PyTorch Implementation of "ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time" ☆24 Updated 3 weeks ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆293 Updated 9 months ago
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆40 Updated this week
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… ☆60 Updated last year
- ☆57 Updated 8 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆85 Updated last year
- [NAACL 2025 Main] Official Implementation of MLLMU-Bench ☆28 Updated 4 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆76 Updated last year