Atomic-man007 / Awesome_Multimodel_LLM
Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Models (MLLMs). It covers datasets, tuning techniques, in-context learning, visual reasoning, foundational models, and more. Stay updated with the latest advancements.
☆341 · Updated 5 months ago
Alternatives and similar repositories for Awesome_Multimodel_LLM
Users interested in Awesome_Multimodel_LLM are comparing it to the libraries listed below.
- Efficient Multimodal Large Language Models: A Survey ☆371 · Updated 4 months ago
- ☆465 · Updated 11 months ago
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey ☆801 · Updated 3 weeks ago
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… ☆366 · Updated last year
- Reading notes about Multimodal Large Language Models, Large Language Models, and Diffusion Models ☆628 · Updated 3 weeks ago
- [MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models ☆288 · Updated last month
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆333 · Updated 6 months ago
- 📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ☆831 · Updated last month
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆317 · Updated last year
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆248 · Updated 3 months ago
- Awesome papers & datasets specifically focused on long-term videos. ☆309 · Updated last month
- A curated list of awesome Multimodal studies. ☆270 · Updated last month
- Paper list about multimodal and large language models, only used to record papers I read in the daily arxiv for personal needs. ☆736 · Updated last week
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆348 · Updated 8 months ago
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆354 · Updated last year
- Research Trends in LLM-guided Multimodal Learning. ☆355 · Updated last year
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆370 · Updated 8 months ago
- ☆350 · Updated last year
- Aligning LMMs with Factually Augmented RLHF ☆375 · Updated last year
- A Survey on Benchmarks of Multimodal Large Language Models ☆137 · Updated 2 months ago
- A Framework of Small-scale Large Multimodal Models ☆897 · Updated 4 months ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆447 · Updated 7 months ago
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆392 · Updated 4 months ago
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ☆362 · Updated 6 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆535 · Updated last year
- Mainly records multimodal knowledge for large language model (LLM) algorithm and application engineers ☆239 · Updated last year
- This is the first paper to explore how to effectively use R1-like RL for MLLMs and introduce Vision-R1, a reasoning MLLM that leverages … ☆688 · Updated this week
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆313 · Updated 11 months ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆872 · Updated 6 months ago
- [CVPR'25 Highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆410 · Updated 4 months ago