Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Models (MLLMs). It covers datasets, tuning techniques, in-context learning, visual reasoning, foundational models, and more. Stay updated with the latest advancements.
★369 · Mar 19, 2025 · Updated last year
Alternatives and similar repositories for Awesome_Multimodel_LLM
Users that are interested in Awesome_Multimodel_LLM are comparing it to the libraries listed below.
- Latest Advances on Multimodal Large Language Models ★17,705 · Updated this week
- A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ★1,013 · Sep 27, 2025 · Updated 7 months ago
- Papers and resources on Controllable Generation using Diffusion Models, including ControlNet, DreamBooth, IP-Adapter. ★504 · Jun 24, 2025 · Updated 10 months ago
- Reading list for Multimodal Large Language Models ★69 · Aug 17, 2023 · Updated 2 years ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ★76 · Oct 16, 2024 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ★1,230 · Jun 28, 2024 · Updated last year
- Paper list about multimodal and large language models, only used to record papers I read in the daily arXiv for personal needs. ★757 · Apr 6, 2026 · Updated 3 weeks ago
- This repository provides valuable references for researchers in the field of multimodality; please start your exploratory travel in RL-bas… ★1,405 · Apr 19, 2026 · Updated last week
- This project aims to collect and collate various datasets for multimodal large model training, including but not limited to pre-training … ★74 · May 7, 2025 · Updated 11 months ago
- Project for SNARE benchmark ★11 · Jun 5, 2024 · Updated last year
- ★492 · Sep 25, 2024 · Updated last year
- Efficient Multimodal Large Language Models: A Survey ★386 · Apr 29, 2025 · Updated last year
- ★15 · May 7, 2024 · Updated last year
- Collection of AWESOME vision-language models for vision tasks ★3,115 · Oct 14, 2025 · Updated 6 months ago
- From Chain-of-Thought prompting to OpenAI o1 and DeepSeek-R1 ★3,601 · Apr 20, 2026 · Updated last week
- ★4,640 · Apr 15, 2026 · Updated 2 weeks ago
- VisionLLM Series ★1,143 · Feb 27, 2025 · Updated last year
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ★384 · Feb 23, 2025 · Updated last year
- Large language model of Medical AI, General Medical AI (GMAI) ★17 · Jan 30, 2024 · Updated 2 years ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ★11,212 · Nov 18, 2024 · Updated last year
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ★24,722 · Aug 12, 2024 · Updated last year
- Open LLM for multimodal ★20 · May 18, 2023 · Updated 2 years ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ★43 · Dec 16, 2025 · Updated 4 months ago
- 🔥🔥🔥 [IEEE TCSVT] Latest Papers, Codes and Datasets on Vid-LLMs. ★3,164 · Mar 28, 2026 · Updated last month
- [CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ★338 · Jul 17, 2024 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ★156 · Apr 30, 2024 · Updated last year
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ★460 · Dec 2, 2024 · Updated last year
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. ★844 · May 14, 2025 · Updated 11 months ago
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation … ★508 · Mar 18, 2025 · Updated last year
- Official repository for Gait Recognition with Drones: A Benchmark (TMM 2023) ★10 · Feb 2, 2024 · Updated 2 years ago
- ★548 · Nov 7, 2024 · Updated last year
- Recent Advances in Vision and Language Pre-training (VLP) ★297 · Jun 6, 2023 · Updated 2 years ago
- A curated list of prompt-based papers in computer vision and vision-language learning. ★925 · Dec 18, 2023 · Updated 2 years ago
- A curated list of awesome multimodal studies. ★324 · Mar 11, 2026 · Updated last month
- A survey on multimodal learning research. ★334 · Aug 22, 2023 · Updated 2 years ago
- VLM-Eval: a framework for evaluating Video Large Language Models. ★36 · Jan 20, 2024 · Updated 2 years ago
- Code for ICML 2023 paper "When and How Does Known Class Help Discover Unknown Ones? Provable Understandings Through Spectral Analysis" ★14 · Jun 24, 2023 · Updated 2 years ago
- [AAAI 2026 Oral] The official code of "UniME-V2: MLLM-as-a-Judge for Universal Multimodal Embedding Learning" ★71 · Dec 8, 2025 · Updated 4 months ago
- (CVPR2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ★363 · Jan 14, 2025 · Updated last year