thunlp / Muffin
☆59 · Updated 11 months ago
Alternatives and similar repositories for Muffin:
Users interested in Muffin are comparing it to the repositories listed below
- Official repository of MMDU dataset ☆80 · Updated 3 months ago
- ☆132 · Updated last year
- ☆94 · Updated last year
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆77 · Updated 11 months ago
- ☆134 · Updated 2 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆24 · Updated 6 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆40 · Updated 6 months ago
- ☆87 · Updated last year
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆77 · Updated 9 months ago
- An RLHF Infrastructure for Vision-Language Models ☆145 · Updated 2 months ago
- 【NeurIPS 2024】Dense Connector for MLLMs ☆154 · Updated 3 months ago
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆67 · Updated last month
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 3 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆95 · Updated 2 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆42 · Updated last year
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆257 · Updated 4 months ago
- ☆47 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆56 · Updated last year
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆109 · Updated last month
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆33 · Updated 2 months ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆63 · Updated last year
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆90 · Updated 2 months ago
- ACL'24 (Oral) Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆56 · Updated 4 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆55 · Updated 2 months ago
- ☆42 · Updated 5 months ago
- MATH-Vision dataset and code to measure Multimodal Mathematical Reasoning capabilities ☆78 · Updated 3 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆59 · Updated 3 months ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆80 · Updated last year
- Official implementation of MIA-DPO ☆49 · Updated 2 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆160 · Updated 3 months ago