ForJadeForest / Lever-LM
The Code for Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models
☆16 · Updated 9 months ago
Alternatives and similar repositories for Lever-LM
Users interested in Lever-LM are comparing it to the libraries listed below.
- An in-context learning research testbed ☆19 · Updated 4 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆54 · Updated 8 months ago
- A hot-pluggable tool for visualizing LLaVA's attention. ☆20 · Updated last year
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆293 · Updated 9 months ago
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆88 · Updated 7 months ago
- An up-to-date curated list of state-of-the-art research, papers, and resources on hallucinations in large vision-language models ☆143 · Updated 2 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆146 · Updated last year
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆94 · Updated 7 months ago
- This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆73 · Updated 11 months ago
- [CVPR 2024] How to Configure Good In-Context Sequence for Visual Question Answering ☆19 · Updated last month
- More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆30 · Updated last month
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆212 · Updated last year
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆126 · Updated last year
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆95 · Updated last year
- 😎 curated list of awesome LMM hallucinations papers, methods & resources. ☆149 · Updated last year
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆89 · Updated 7 months ago
- Code for Reducing Hallucinations in Vision-Language Models via Latent Space Steering ☆65 · Updated 7 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆76 · Updated last year
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆89 · Updated last year
- [CVPR '25] Interleaved-Modal Chain-of-Thought ☆62 · Updated 2 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆85 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆48 · Updated this week
- [ICML 2025] Official implementation of paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… ☆141 · Updated last week
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆289 · Updated 8 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 5 months ago
- This repository will continuously update the latest papers, technical reports, and benchmarks on multimodal reasoning! ☆47 · Updated 3 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models. ☆77 · Updated 8 months ago