Atomic-man007 / Awesome_Multimodel_LLM
Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Models (MLLMs). It covers datasets, tuning techniques, in-context learning, visual reasoning, foundational models, and more. Stay updated with the latest advancements.
⭐292 · Updated last week
Alternatives and similar repositories for Awesome_Multimodel_LLM:
Users who are interested in Awesome_Multimodel_LLM are comparing it to the libraries listed below:
- Efficient Multimodal Large Language Models: A Survey ⭐312 · Updated 6 months ago
- Reading notes about Multimodal Large Language Models, Large Language Models, and Diffusion Models ⭐247 · Updated last month
- A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ⭐578 · Updated last month
- Paper list about multimodal and large language models, only used to record papers I read in the daily arXiv for personal needs. ⭐585 · Updated this week
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… ⭐309 · Updated 5 months ago
- ⭐398 · Updated 4 months ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ⭐307 · Updated 10 months ago
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ⭐182 · Updated 5 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ⭐493 · Updated 9 months ago
- ⭐308 · Updated last year
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ⭐327 · Updated last month
- [MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models ⭐286 · Updated 2 weeks ago
- A curated list of awesome Multimodal studies. ⭐134 · Updated this week
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ⭐244 · Updated 2 weeks ago
- RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness ⭐291 · Updated 2 months ago
- ✨✨Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ⭐456 · Updated 2 months ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ⭐346 · Updated 3 weeks ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ⭐342 · Updated last year
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ⭐227 · Updated last month
- This is a repository for organizing papers, codes and other resources related to unified multimodal models. ⭐364 · Updated 3 weeks ago
- A curated list of awesome LMM hallucinations papers, methods & resources. ⭐147 · Updated 10 months ago
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ⭐269 · Updated 11 months ago
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ⭐331 · Updated 2 months ago
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation … ⭐428 · Updated 3 months ago
- Awesome papers & datasets specifically focused on long-term videos. ⭐245 · Updated 3 months ago
- A Framework of Small-scale Large Multimodal Models ⭐740 · Updated 2 weeks ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ⭐270 · Updated 3 months ago
- Research Trends in LLM-guided Multimodal Learning. ⭐357 · Updated last year
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ⭐264 · Updated 5 months ago