Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Models (MLLMs). It covers datasets, tuning techniques, in-context learning, visual reasoning, foundation models, and more. Stay updated with the latest advancements.
☆364 · Updated Mar 19, 2025
Alternatives and similar repositories for Awesome_Multimodel_LLM
Users interested in Awesome_Multimodel_LLM are comparing it to the libraries listed below.
- Latest Advances on Multimodal Large Language Models ☆17,466 · Updated Mar 12, 2026
- A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ☆994 · Updated Sep 27, 2025
- Papers and resources on Controllable Generation using Diffusion Models, including ControlNet, DreamBooth, IP-Adapter. ☆505 · Updated Jun 24, 2025
- Reading list for Multimodal Large Language Models ☆69 · Updated Aug 17, 2023
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆73 · Updated Oct 16, 2024
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,230 · Updated Jun 28, 2024
- Paper list on multimodal and large language models, used only to record papers I read from the daily arXiv for personal needs. ☆756 · Updated Jan 22, 2026
- This repository provides a valuable reference for researchers in the field of multimodality; please start your exploration of RL-bas… ☆1,372 · Updated Feb 26, 2026
- This project aims to collect and collate various datasets for multimodal large model training, including but not limited to pre-training … ☆69 · Updated May 7, 2025
- Project for SNARE benchmark ☆11 · Updated Jun 5, 2024
- ☆487 · Updated Sep 25, 2024
- Efficient Multimodal Large Language Models: A Survey ☆388 · Updated Apr 29, 2025
- ☆15 · Updated May 7, 2024
- Collection of AWESOME vision-language models for vision tasks ☆3,096 · Updated Oct 14, 2025
- From Chain-of-Thought prompting to OpenAI o1 and DeepSeek-R1 ☆3,568 · Updated May 7, 2025
- ☆4,591 · Updated Sep 14, 2025
- VisionLLM Series ☆1,137 · Updated Feb 27, 2025
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ☆382 · Updated Feb 23, 2025
- Implementation of the Benchmark Approaches for Medical Instructional Video Classification (MedVidCL) and Medical Video Question Answering… ☆31 · Updated Jan 31, 2023
- Large language model of Medical AI, General Medical AI (GMAI) ☆17 · Updated Jan 30, 2024
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,578 · Updated Aug 12, 2024
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆11,189 · Updated Nov 18, 2024
- open llm for multimodal ☆20 · Updated May 18, 2023
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆42 · Updated Dec 16, 2025
- 🔥🔥🔥 [IEEE TCSVT] Latest Papers, Codes and Datasets on Vid-LLMs. ☆3,116 · Updated this week
- [CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆336 · Updated Jul 17, 2024
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆155 · Updated Apr 30, 2024
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆459 · Updated Dec 2, 2024
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. ☆843 · Updated May 14, 2025
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation … ☆509 · Updated Mar 18, 2025
- Official repository for Gait Recognition with Drones: A benchmark (TMM 2023) ☆10 · Updated Feb 2, 2024
- ☆547 · Updated Nov 7, 2024
- A curated list of prompt-based papers in computer vision and vision-language learning. ☆924 · Updated Dec 18, 2023
- Recent Advances in Vision and Language Pre-training (VLP) ☆295 · Updated Jun 6, 2023
- [AAAI 2026 Oral] The official code of "UniME-V2: MLLM-as-a-Judge for Universal Multimodal Embedding Learning" ☆68 · Updated Dec 8, 2025
- A curated list of awesome Multimodal studies. ☆317 · Updated Mar 11, 2026
- A Survey on multimodal learning research. ☆332 · Updated Aug 22, 2023
- [CVPR 2024] A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆361 · Updated Jan 14, 2025
- Code for ICML 2023 paper "When and How Does Known Class Help Discover Unknown Ones? Provable Understandings Through Spectral Analysis" ☆14 · Updated Jun 24, 2023