NJUNLP / Hallu-PI
The code and datasets of our ACM MM 2024 paper "Hallu-PI: Evaluating Hallucination in Multi-modal Large Language Models within Perturbed Inputs".
☆10 · Updated 9 months ago
Alternatives and similar repositories for Hallu-PI
Users interested in Hallu-PI are comparing it to the libraries listed below.
- [ACL 2025 Main] Open-source toolkit for automatic evaluation of text-to-image generation task, including training & test datasets and a d… ☆12 · Updated 2 weeks ago
- ☆17 · Updated 11 months ago
- The released data for paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models". ☆33 · Updated last year
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆22 · Updated 10 months ago
- An automatic MLLM hallucination detection framework ☆19 · Updated last year
- [ACL 2025 Findings] Official pytorch implementation of "Don't Miss the Forest for the Trees: Attentional Vision Calibration for Large Vis… ☆17 · Updated last year
- ☆19 · Updated last year
- [ICML'25] Official code of paper "Fast Large Language Model Collaborative Decoding via Speculation" ☆21 · Updated 3 weeks ago
- M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning ☆30 · Updated this week
- [ACL'25] Mosaic-IT: Cost-Free Compositional Data Synthesis for Instruction Tuning ☆19 · Updated 3 weeks ago
- [ICCV 2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control" ☆53 · Updated last year
- Codes for "ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding" [ICML 2025] ☆36 · Updated last week
- ☆22 · Updated 2 months ago
- Official InfiniBench: A Benchmark for Large Multi-Modal Models in Long-Form Movies and TV Shows ☆15 · Updated last month
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated last year
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆46 · Updated 8 months ago
- Repo for outstanding paper @ ACL 2023 "Do PLMs Know and Understand Ontological Knowledge?" ☆32 · Updated last year
- RACE is a multi-dimensional benchmark for code generation that focuses on Readability, mAintainability, Correctness, and Efficiency. ☆10 · Updated 9 months ago
- ☆38 · Updated last year
- [ACL 2023] Delving into the Openness of CLIP ☆23 · Updated 2 years ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆45 · Updated last year
- ☆22 · Updated 2 months ago
- Code for our paper "All in an Aggregated Image for In-Image Learning" ☆29 · Updated last year
- ☆15 · Updated 2 months ago
- The official code for paper "EasyGen: Easing Multimodal Generation with a Bidirectional Conditional Diffusion Model and LLMs" ☆74 · Updated 8 months ago
- Unsupervised GRPO ☆39 · Updated last month
- [CVPR 2025] VideoICL: Confidence-based Iterative In-context Learning for Out-of-Distribution Video Understanding ☆16 · Updated 3 months ago
- Code for "Beyond Generic: Enhancing Image Captioning with Real-World Knowledge using Vision-Language Pre-Training Model" ☆12 · Updated last year
- Visual and Embodied Concepts evaluation benchmark ☆21 · Updated last year
- Official repo for "AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability" ☆32 · Updated last year