Atomic-man007 / Awesome_Multimodel_LLM
Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Models (MLLMs). It covers datasets, tuning techniques, in-context learning, visual reasoning, foundational models, and more. Stay updated with the latest advancements.
☆361 · Updated 10 months ago
Alternatives and similar repositories for Awesome_Multimodel_LLM
Users interested in Awesome_Multimodel_LLM are comparing it to the libraries listed below.
- Efficient Multimodal Large Language Models: A Survey ☆387 · Updated 9 months ago
- 📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ☆979 · Updated 4 months ago
- ☆484 · Updated last year
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey ☆956 · Updated 2 months ago
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… ☆397 · Updated last year
- [MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models ☆290 · Updated 6 months ago
- Reading notes about Multimodal Large Language Models, Large Language Models, and Diffusion Models ☆925 · Updated last week
- A curated list of awesome Multimodal studies. ☆312 · Updated last month
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆286 · Updated 8 months ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆317 · Updated last year
- Awesome papers & datasets specifically focused on long-term videos. ☆352 · Updated 4 months ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆364 · Updated 2 months ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆424 · Updated last year
- Mainly records multimodality-related knowledge for large language model (LLM) algorithm (application) engineers ☆266 · Updated last year
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ☆381 · Updated 11 months ago
- Paper list about multimodal and large language models, only used to record papers I read in the daily arxiv for personal needs. ☆754 · Updated 3 weeks ago
- [ICLR2026] This is the first paper to explore how to effectively use R1-like RL for MLLMs and introduce Vision-R1, a reasoning MLLM that… ☆760 · Updated 2 weeks ago
- A Survey on Benchmarks of Multimodal Large Language Models ☆147 · Updated 7 months ago
- This repository provides valuable reference for researchers in the field of multimodality, please start your exploratory travel in RL-bas… ☆1,349 · Updated 2 months ago
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆443 · Updated 8 months ago
- Research Trends in LLM-guided Multimodal Learning. ☆357 · Updated 2 years ago
- List of papers about Large Multimodal model ☆31 · Updated 8 months ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆477 · Updated last year
- Aligning LMMs with Factually Augmented RLHF ☆392 · Updated 2 years ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆556 · Updated last year
- ☆360 · Updated 2 years ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆378 · Updated last year
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. ☆840 · Updated 8 months ago
- (CVPR2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆360 · Updated last year
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆306 · Updated last year