Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Models (MLLMs). It covers datasets, tuning techniques, in-context learning, visual reasoning, foundational models, and more. Stay updated with the latest advancements.
⭐365 · Mar 19, 2025 · Updated last year
Alternatives and similar repositories for Awesome_Multimodel_LLM
Users who are interested in Awesome_Multimodel_LLM are comparing it to the libraries listed below.
- Latest Advances on Multimodal Large Language Models ⭐17,568 · Updated this week
- A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ⭐1,003 · Sep 27, 2025 · Updated 6 months ago
- Papers and resources on Controllable Generation using Diffusion Models, including ControlNet, DreamBooth, IP-Adapter. ⭐505 · Jun 24, 2025 · Updated 9 months ago
- Reading list for Multimodal Large Language Models ⭐69 · Aug 17, 2023 · Updated 2 years ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ⭐74 · Oct 16, 2024 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ⭐1,230 · Jun 28, 2024 · Updated last year
- Paper list about multimodal and large language models, only used to record papers I read in the daily arXiv for personal needs. ⭐755 · Apr 1, 2026 · Updated last week
- This repository provides a valuable reference for researchers in the field of multimodality; please start your exploratory travel in RL-bas… ⭐1,392 · Feb 26, 2026 · Updated last month
- Project for SNARE benchmark ⭐11 · Jun 5, 2024 · Updated last year
- ⭐490 · Sep 25, 2024 · Updated last year
- Efficient Multimodal Large Language Models: A Survey ⭐387 · Apr 29, 2025 · Updated 11 months ago
- ⭐15 · May 7, 2024 · Updated last year
- Collection of AWESOME vision-language models for vision tasks ⭐3,106 · Oct 14, 2025 · Updated 5 months ago
- From Chain-of-Thought prompting to OpenAI o1 and DeepSeek-R1 ⭐3,577 · May 7, 2025 · Updated 11 months ago
- ⭐4,624 · Sep 14, 2025 · Updated 6 months ago
- VisionLLM Series ⭐1,140 · Feb 27, 2025 · Updated last year
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ⭐382 · Feb 23, 2025 · Updated last year
- Implementation of the Benchmark Approaches for Medical Instructional Video Classification (MedVidCL) and Medical Video Question Answering… ⭐31 · Jan 31, 2023 · Updated 3 years ago
- Large language model of Medical AI, General Medical AI (GMAI) ⭐17 · Jan 30, 2024 · Updated 2 years ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ⭐11,192 · Nov 18, 2024 · Updated last year
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ⭐24,652 · Aug 12, 2024 · Updated last year
- open llm for multimodal ⭐20 · May 18, 2023 · Updated 2 years ago
- VideoHallucer, the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ⭐42 · Dec 16, 2025 · Updated 3 months ago
- 🔥🔥🔥 [IEEE TCSVT] Latest Papers, Codes and Datasets on Vid-LLMs. ⭐3,135 · Mar 28, 2026 · Updated last week
- [CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ⭐338 · Jul 17, 2024 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ⭐155 · Apr 30, 2024 · Updated last year
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ⭐460 · Dec 2, 2024 · Updated last year
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. ⭐845 · May 14, 2025 · Updated 10 months ago
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation … ⭐508 · Mar 18, 2025 · Updated last year
- ⭐548 · Nov 7, 2024 · Updated last year
- A curated list of prompt-based papers in computer vision and vision-language learning. ⭐926 · Dec 18, 2023 · Updated 2 years ago
- Recent Advances in Vision and Language Pre-training (VLP) ⭐297 · Jun 6, 2023 · Updated 2 years ago
- A curated list of awesome Multimodal studies. ⭐320 · Mar 11, 2026 · Updated 3 weeks ago
- A Survey on multimodal learning research. ⭐333 · Aug 22, 2023 · Updated 2 years ago
- VLM-Eval, a framework for evaluating Video Large Language Models. ⭐37 · Jan 20, 2024 · Updated 2 years ago
- Code for ICML 2023 paper "When and How Does Known Class Help Discover Unknown Ones? Provable Understandings Through Spectral Analysis" ⭐14 · Jun 24, 2023 · Updated 2 years ago
- [AAAI 2026 Oral] The official code of "UniME-V2: MLLM-as-a-Judge for Universal Multimodal Embedding Learning" ⭐70 · Dec 8, 2025 · Updated 4 months ago
- (CVPR2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ⭐363 · Jan 14, 2025 · Updated last year
- SVIT: Scaling up Visual Instruction Tuning ⭐166 · Jun 20, 2024 · Updated last year