vincentlux / Awesome-Multimodal-LLM
Reading list for Multimodal Large Language Models
☆68 Updated last year
Alternatives and similar repositories for Awesome-Multimodal-LLM:
Users interested in Awesome-Multimodal-LLM are comparing it to the libraries listed below.
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆120 Updated 3 months ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆134 Updated last year
- 😎 A curated list of awesome LMM hallucination papers, methods & resources. ☆146 Updated 9 months ago
- ☆59 Updated 11 months ago
- An up-to-date curated list of state-of-the-art research, papers & resources on hallucinations in large vision-language models ☆71 Updated last week
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆87 Updated 11 months ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆77 Updated 11 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆59 Updated 3 months ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆105 Updated last year
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆193 Updated 9 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆278 Updated 2 months ago
- An RLHF Infrastructure for Vision-Language Models ☆145 Updated 2 months ago
- InstructionGPT-4 ☆38 Updated last year
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆77 Updated 9 months ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆93 Updated 3 weeks ago
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… ☆117 Updated last month
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆76 Updated 6 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆139 Updated 8 months ago
- A Survey on Benchmarks of Multimodal Large Language Models ☆79 Updated 2 weeks ago
- ☆94 Updated last year
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆42 Updated last year
- A collection of visual instruction tuning datasets. ☆76 Updated 10 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆229 Updated 3 months ago
- Official PyTorch Implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆51 Updated 2 months ago
- MATH-Vision dataset and code to measure Multimodal Mathematical Reasoning capabilities. ☆78 Updated 3 months ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆80 Updated last year
- ☆47 Updated last year
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆24 Updated 6 months ago
- [EMNLP 2023] InfoSeek: A New VQA Benchmark focusing on Visual Info-Seeking Questions ☆17 Updated 7 months ago
- The Hugging Face implementation of the Fine-grained Late-interaction Multi-modal Retriever. ☆78 Updated 3 weeks ago