shikiw / OPERA
[CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation
☆341 · Updated 9 months ago
Alternatives and similar repositories for OPERA
Users interested in OPERA are comparing it to the repositories listed below.
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆278 · Updated 7 months ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆282 · Updated 6 months ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆207 · Updated last year
- An RLHF Infrastructure for Vision-Language Models ☆177 · Updated 6 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆90 · Updated 6 months ago
- 😎 A curated list of awesome LMM hallucination papers, methods & resources ☆149 · Updated last year
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆279 · Updated last year
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆278 · Updated 8 months ago
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆218 · Updated last week
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆429 · Updated 4 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆120 · Updated 6 months ago
- Visualizing the attention of vision-language models ☆176 · Updated 3 months ago
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆197 · Updated last month
- ☆334 · Updated last year
- An up-to-date curated list of state-of-the-art research, papers & resources on hallucinations in large vision-language models ☆129 · Updated 3 weeks ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆120 · Updated last year
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆164 · Updated 2 months ago
- This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆70 · Updated 10 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆261 · Updated 11 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆171 · Updated this week
- ☆100 · Updated last month
- A journey to a real multimodal R1! We are running large-scale experiments ☆305 · Updated 2 weeks ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆318 · Updated 5 months ago
- 📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM) ☆701 · Updated last month
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆301 · Updated 4 months ago
- Efficient Multimodal Large Language Models: A Survey ☆349 · Updated last month
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆182 · Updated 8 months ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆313 · Updated last year
- ICML'2024 | MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆111 · Updated 10 months ago
- (CVPR2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions ☆340 · Updated 4 months ago