assafbk / mocha_code
Mitigating Open-Vocabulary Caption Hallucinations (EMNLP 2024)
☆17 · Updated last year
Alternatives and similar repositories for mocha_code
Users interested in mocha_code are comparing it to the libraries listed below
- ☆11 · Updated 9 months ago
- FaithScore: Fine-grained Evaluations of Hallucinations in Large Vision-Language Models ☆30 · Updated 7 months ago
- ☆11 · Updated last year
- [ECCV'24] Official Implementation of Autoregressive Visual Entity Recognizer. ☆14 · Updated last year
- ☆24 · Updated 2 years ago
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆65 · Updated last month
- An automatic MLLM hallucination detection framework ☆19 · Updated 2 years ago
- Official Repository of Personalized Visual Instruct Tuning ☆32 · Updated 8 months ago
- VideoHallucer, the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆38 · Updated 2 weeks ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 · Updated last year
- Implementation and dataset for the paper "Can MLLMs Perform Text-to-Image In-Context Learning?" ☆41 · Updated 5 months ago
- ✨✨The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆50 · Updated 3 months ago
- HallE-Control: Controlling Object Hallucination in LMMs ☆31 · Updated last year
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆80 · Updated last month
- [MM2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆57 · Updated last year
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆45 · Updated last year
- [ECCV 2024] BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models ☆86 · Updated last year
- Preference Learning for LLaVA ☆52 · Updated last year
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆88 · Updated last year
- ICCV 2023 (Oral) Open-domain Visual Entity Recognition Towards Recognizing Millions of Wikipedia Entities ☆43 · Updated 5 months ago
- [ACM Multimedia 2025] This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual… ☆82 · Updated 8 months ago
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆74 · Updated 3 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆54 · Updated last year
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆94 · Updated 2 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆80 · Updated 2 weeks ago
- [NAACL 2024] Vision language model that reduces hallucinations through self-feedback guided revision. Visualizes attentions on image feat… ☆46 · Updated last year
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" ☆27 · Updated last year
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" ☆48 · Updated last year
- Official repo for "AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability" ☆33 · Updated last year
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆34 · Updated last year