qizhou000 / VisEdit
[AAAI 2025 oral] Attribution Analysis Meets Model Editing: Advancing Knowledge Correction in Vision Language Models with VisEdit
☆17 · Updated 8 months ago
Alternatives and similar repositories for VisEdit
Users interested in VisEdit are comparing it to the repositories listed below
- ☆55 · Updated last year
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆98 · Updated last year
- Paper list on LLMs and Multimodal LLMs ☆50 · Updated 2 weeks ago
- An up-to-date curated list of state-of-the-art research, papers, and resources on hallucinations in large vision-language models ☆233 · Updated 2 months ago
- [NAACL 2025 Main] Official Implementation of MLLMU-Bench ☆43 · Updated 9 months ago
- Latest Advances on Modality Priors in Multimodal Large Language Models ☆29 · Updated last week
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆361 · Updated last year
- 🔎 Official code for our paper: "VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation" ☆47 · Updated 9 months ago
- [ACL 2024] Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models. Detect and mitigate object hallucinatio… ☆25 · Updated 10 months ago
- A curated list of awesome papers on dataset reduction, including dataset distillation (dataset condensation) and dataset pruning (coreset… ☆59 · Updated 11 months ago
- [NeurIPS 2023] Generalized Logit Adjustment ☆39 · Updated last year
- Reading notes on papers about OOD generalization ☆35 · Updated last year
- [CVPR 2025] Lifelong Knowledge Editing for Vision Language Models with Low-Rank Mixture-of-Experts ☆20 · Updated 6 months ago
- Poison as Cure: Visual Noise for Mitigating Object Hallucinations in LVMs ☆30 · Updated 3 months ago
- ☆18 · Updated last year
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository agg… ☆171 · Updated 2 months ago
- Code for our ICML'24 paper on multimodal dataset distillation ☆43 · Updated last year
- [ICLR 2025] Code for Self-Correcting Decoding with Generative Feedback for Mitigating Hallucinations in Large Vision-Language Models ☆24 · Updated 8 months ago
- List of papers about Large Multimodal Models ☆31 · Updated 6 months ago
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆107 · Updated last year
- Agentic MLLMs ☆111 · Updated last month
- [ICLR 2025] "Noisy Test-Time Adaptation in Vision-Language Models" ☆17 · Updated 9 months ago
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆96 · Updated last year
- ECSO (Make MLLMs safe with neither training nor any external models!) (https://arxiv.org/abs/2403.09572) ☆36 · Updated last year
- [CVPR 2025] Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Att… ☆58 · Updated 2 months ago
- [CVPR 2025 (Oral)] Mitigating Hallucinations in Large Vision-Language Models via DPO: On-Policy Data Hold the Key ☆93 · Updated 2 weeks ago
- Multimodal Large Language Model (MLLM) Tuning Survey: Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model ☆90 · Updated 4 months ago
- Code for ICLR 2025 Paper: Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in LVLMs ☆22 · Updated 7 months ago
- [ICML 2025] Official implementation of paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… ☆177 · Updated 2 months ago
- [CVPR 2025] CL-MoE: Enhancing Multimodal Large Language Model with Dual Momentum Mixture-of-Experts for Continual Visual Question Answeri… ☆43 · Updated 6 months ago