FaithScore: Fine-grained Evaluations of Hallucinations in Large Vision-Language Models
☆33 · Updated Nov 27, 2025 (4 months ago)
Alternatives and similar repositories for FAITHSCORE
Users interested in FAITHSCORE are comparing it to the repositories listed below.
- HallE-Control: Controlling Object Hallucination in LMMs ☆32 · Updated Apr 10, 2024
- Papers on diverse image (and some video) captioning ☆26 · Updated Apr 4, 2023
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆255 · Updated Aug 21, 2025
- ☆18 · Updated Aug 1, 2024
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆101 · Updated Jan 30, 2024
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆57 · Updated Oct 28, 2024
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆162 · Updated Jan 15, 2024
- Mitigating Open-Vocabulary Caption Hallucinations (EMNLP 2024) ☆18 · Updated Oct 18, 2024
- GeckoNum Benchmark for T2I Model Evaluation ☆15 · Updated Dec 5, 2024
- The official code repository for EgoOrientBench (CVPR 2025) ☆15 · Updated Nov 24, 2025
- ☆38 · Updated May 12, 2025
- [NAACL 2024] Vision-language model that reduces hallucinations through self-feedback guided revision; visualizes attentions on image feat… ☆49 · Updated Aug 21, 2024
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆93 · Updated Apr 30, 2024
- Code and data for the NAACL 2025 paper "IHEval: Evaluating Language Models on Following the Instruction Hierarchy" ☆16 · Updated Feb 25, 2025
- This repository is unmaintained; see lumo for details ☆10 · Updated Mar 19, 2023
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆74 · Updated Oct 16, 2024
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆87 · Updated Oct 26, 2025
- ☆95 · Updated Mar 29, 2019
- Learning visually grounded word embeddings using abstract scenes ☆18 · Updated Mar 1, 2019
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆69 · Updated May 31, 2024
- Concept Learning Dynamics ☆16 · Updated Oct 29, 2024
- ☆17 · Updated Jul 23, 2025
- An easy-to-use hallucination detection framework for LLMs ☆63 · Updated Apr 21, 2024
- Davidsonian Scene Graph (DSG) for Text-to-Image Evaluation (ICLR 2024) ☆106 · Updated Dec 9, 2024
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆47 · Updated Nov 10, 2024
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆40 · Updated Apr 18, 2025
- ☆19 · Updated Feb 21, 2024
- Code for the paper "Controllable Video Captioning with an Exemplar Sentence" ☆12 · Updated Apr 14, 2021
- [ICML 2024] Official implementation of "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆112 · Updated Dec 4, 2024
- ☆12 · Updated Oct 2, 2020
- TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models ☆40 · Updated Nov 10, 2024
- Balancing the Picture: Debiasing Vision-Language Datasets with Synthetic Contrast Sets ☆12 · Updated May 25, 2023
- ☆119 · Updated Feb 11, 2025
- Training code for CLIP-FlanT5 ☆31 · Updated Jul 29, 2024
- [NeurIPS 2024] Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation ☆18 · Updated Dec 5, 2024
- ☆23 · Updated Jan 19, 2026
- ☆19 · Updated Dec 6, 2023
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆20 · Updated Mar 28, 2024
- Code and datasets for "What's "up" with vision-language models? Investigating their struggle with spatial reasoning" ☆71 · Updated Feb 28, 2024