assafbk / mocha_code
Mitigating Open-Vocabulary Caption Hallucinations (EMNLP 2024)
☆19 · Updated last year
Alternatives and similar repositories for mocha_code
Users interested in mocha_code are comparing it to the repositories listed below:
- ☆24 · Updated 2 years ago
- An automatic MLLM hallucination detection framework ☆19 · Updated 2 years ago
- ViCToR: Improving Visual Comprehension via Token Reconstruction for Pretraining LMMs ☆27 · Updated 5 months ago
- ✨✨The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆52 · Updated 6 months ago
- [NeurIPS2024] Official code for (IMA) Implicit Multimodal Alignment: On the Generalization of Frozen LLMs to Multimodal Inputs ☆24 · Updated last year
- Official repository of "Interactive Text-to-Image Retrieval with Large Language Models: A Plug-and-Play Approach" (ACL 2024 Oral) ☆34 · Updated 10 months ago
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆22 · Updated last year
- Question-Aware Gaussian Experts for Audio-Visual Question Answering -- Official PyTorch Implementation (CVPR'25, Highlight) ☆26 · Updated 8 months ago
- [MM2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆61 · Updated last year
- Official implementation of "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data" (ICLR 2024) ☆34 · Updated last year
- ☆18 · Updated last year
- A project for tri-modal LLM benchmarking and instruction tuning. ☆56 · Updated 10 months ago
- HallE-Control: Controlling Object Hallucination in LMMs ☆31 · Updated last year
- Preference Learning for LLaVA ☆59 · Updated last year
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆36 · Updated last year
- [ECCV'24] Official Implementation of Autoregressive Visual Entity Recognizer. ☆14 · Updated last year
- ☆11 · Updated last year
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters". ☆69 · Updated 9 months ago
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" ☆27 · Updated 2 years ago
- ☆19 · Updated last year
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆42 · Updated last month
- ☆17 · Updated last year
- [NAACL 2024] Vision language model that reduces hallucinations through self-feedback guided revision. Visualizes attentions on image feat… ☆47 · Updated last year
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆47 · Updated last year
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆58 · Updated last year
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆88 · Updated 4 months ago
- Text-only or language-free training for multimodal tasks (image/audio/video caption, retrieval, text2image) ☆12 · Updated last year
- PyTorch implementation of StableMask (ICML'24) ☆15 · Updated last year
- LMMs solving catastrophic forgetting (AAAI 2025) ☆45 · Updated 9 months ago
- [ICLR2025] γ-MoD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆42 · Updated 3 months ago