assafbk / mocha_code
Mitigating Open-Vocabulary Caption Hallucinations (EMNLP 2024)
☆17 · Updated last year
Alternatives and similar repositories for mocha_code
Users who are interested in mocha_code are comparing it to the libraries listed below.
- text-only training or language-free training for multimodal tasks (image/audio/video caption, retrieval, text2image) ☆11 · Updated last year
- ☆11 · Updated 8 months ago
- Repo for paper "CODIS: Benchmarking Context-Dependent Visual Comprehension for Multimodal Large Language Models". ☆12 · Updated last year
- [ECCV'24] Official Implementation of Autoregressive Visual Entity Recognizer. ☆14 · Updated last year
- An automatic MLLM hallucination detection framework ☆19 · Updated 2 years ago
- ☆11 · Updated last year
- ✨✨The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆48 · Updated 3 months ago
- FaithScore: Fine-grained Evaluations of Hallucinations in Large Vision-Language Models ☆30 · Updated 7 months ago
- ☆18 · Updated last year
- ☆24 · Updated 2 years ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆31 · Updated 3 weeks ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆54 · Updated 11 months ago
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆30 · Updated 9 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆48 · Updated last year
- ViCToR: Improving Visual Comprehension via Token Reconstruction for Pretraining LMMs ☆25 · Updated 2 months ago
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆73 · Updated 3 months ago
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆50 · Updated last year
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 · Updated last year
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆93 · Updated last month
- ☆14 · Updated 7 months ago
- VideoHallucer, the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆37 · Updated 6 months ago
- The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search" [EMNLP25] ☆32 · Updated last month
- Official implementation of MIA-DPO ☆66 · Updated 8 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆80 · Updated last year
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆66 · Updated 4 months ago
- Official Repository of Personalized Visual Instruct Tuning ☆32 · Updated 7 months ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆40 · Updated 7 months ago
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆76 · Updated 3 weeks ago
- ICCV 2023 (Oral) Open-domain Visual Entity Recognition Towards Recognizing Millions of Wikipedia Entities ☆43 · Updated 4 months ago
- ☆43 · Updated 11 months ago