Ziwei-Zheng / LVLM-Stethoscope
A library of visualization tools for the interpretability and hallucination analysis of large vision-language models (LVLMs).
☆22 · Updated last month
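The library's own API is not shown on this page; purely as a rough illustration of the kind of cross-modal attention visualization such interpretability tools provide, the sketch below overlays a language token's attention over image patches onto the input image. The attention vector, patch grid size, and image path are placeholder assumptions for the example, not part of LVLM-Stethoscope.

```python
# Minimal sketch: visualize a token's attention over image patches as a heatmap
# overlay. Assumes you have already extracted a flat attention vector of length
# grid_size * grid_size from an LVLM (placeholder here, not the library's API).
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

def overlay_patch_attention(image_path, patch_attn, grid_size=24, alpha=0.5):
    """Reshape a (grid_size*grid_size,) attention vector into a grid,
    upsample it to the image resolution, and draw it over the image."""
    img = Image.open(image_path).convert("RGB")
    attn = np.asarray(patch_attn, dtype=np.float32).reshape(grid_size, grid_size)
    attn = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)  # normalize to [0, 1]
    heat = Image.fromarray(np.uint8(attn * 255)).resize(img.size, Image.BILINEAR)

    plt.imshow(img)
    plt.imshow(np.asarray(heat), cmap="jet", alpha=alpha)  # semi-transparent heatmap
    plt.axis("off")
    plt.show()

# Example usage with random weights standing in for attention from an LVLM:
# overlay_patch_attention("example.jpg", np.random.rand(24 * 24))
```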
Alternatives and similar repositories for LVLM-Stethoscope:
Users interested in LVLM-Stethoscope are comparing it to the libraries listed below.
- Code for the paper "Nullu: Mitigating Object Hallucinations in Large Vision-Language Models via HalluSpace Projection" ☆13 · Updated last month
- The official repo for Debiasing Large Visual Language Models, including a post-hoc debiasing method and a Visual Debias Decoding strategy ☆76 · Updated 10 months ago
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆80 · Updated 2 months ago
- [arXiv 2024] AGLA: Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆21 · Updated 7 months ago
- ☆41 · Updated 2 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆78 · Updated 10 months ago
- Instruction Tuning in the Continual Learning paradigm ☆39 · Updated 2 weeks ago
- ECSO (make MLLMs safe without any training or external models!) (https://arxiv.org/abs/2403.09572) ☆21 · Updated 3 months ago
- PyTorch implementation of "Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Large Language Models" ☆20 · Updated 2 weeks ago
- HallE-Control: Controlling Object Hallucination in LMMs ☆29 · Updated 10 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆43 · Updated 7 months ago
- 🔥🔥🔥 Code for "Empowering Multimodal Large Language Models with Evol-Instruct" ☆12 · Updated last week
- Code release for VTW (AAAI 2025 Oral) ☆32 · Updated last month
- Official code for the ICLR 2024 paper "A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation" ☆75 · Updated 10 months ago
- [ICCV 2023 Oral] The official repository for the paper "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning" ☆65 · Updated last year
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024) ☆42 · Updated last month
- [arXiv] Cross-Modal Adapter for Text-Video Retrieval ☆55 · Updated 2 years ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆46 · Updated last year
- The official implementation of "MLLMs-Augmented Visual-Language Representation Learning" ☆31 · Updated 11 months ago
- An up-to-date curated list of state-of-the-art research, papers, and resources on hallucinations in large vision-language models ☆92 · Updated 3 weeks ago
- Official PyTorch implementation of "RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language Models" ☆10 · Updated 2 months ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆111 · Updated last year
- [NeurIPS'24] Official implementation of the paper "Unveiling the Tapestry of Consistency in Large Vision-Language Models" ☆34 · Updated 3 months ago
- [ICCV 2023] CTP: Towards Vision-Language Continual Pretraining via Compatible Momentum Contrast and Topology Preservation ☆31 · Updated 4 months ago
- Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models (NeurIPS 2022) ☆163 · Updated 2 years ago
- Code for "Reducing Hallucinations in Vision-Language Models via Latent Space Steering" ☆27 · Updated 2 months ago
- [NeurIPS 2023] Generalized Logit Adjustment ☆34 · Updated 10 months ago
- ☆24 · Updated 9 months ago
- [NeurIPS 2024] Mitigating Object Hallucination via Concentric Causal Attention ☆47 · Updated last month
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆197 · Updated 10 months ago