Research Trends in LLM-guided Multimodal Learning.
☆356 · Oct 17, 2023 · Updated 2 years ago
Alternatives and similar repositories for Awesome-Multimodal-LLM
Users interested in Awesome-Multimodal-LLM are comparing it to the libraries listed below.
- Latest Advances on Multimodal Large Language Models ☆17,505 · Mar 20, 2026 · Updated last week
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆872 · Mar 8, 2025 · Updated last year
- Reading list for Multimodal Large Language Models ☆69 · Aug 17, 2023 · Updated 2 years ago
- An open-source framework for training large multimodal models. ☆4,083 · Aug 31, 2024 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆134 · Jun 20, 2023 · Updated 2 years ago
- ☆807 · Jul 8, 2024 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆360 · Dec 18, 2023 · Updated 2 years ago
- A trend that started with "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models". ☆2,101 · Oct 5, 2023 · Updated 2 years ago
- Multimodal-GPT ☆1,517 · Jun 4, 2023 · Updated 2 years ago
- A multimodal large-scale model which performs close to the closed-source Qwen-VL-PLUS on many datasets and significantly surpasses the p… ☆14 · Feb 5, 2024 · Updated 2 years ago
- Official implementation of SEED-LLaMA (ICLR 2024). ☆642 · Sep 21, 2024 · Updated last year
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,540 · Apr 2, 2025 · Updated 11 months ago
- X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages ☆316 · Aug 10, 2023 · Updated 2 years ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,772 · Jan 12, 2026 · Updated 2 months ago
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆523 · Jan 27, 2024 · Updated 2 years ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆281 · Jun 25, 2024 · Updated last year
- Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration ☆1,592 · Jan 1, 2025 · Updated last year
- LAVIS: A One-stop Library for Language-Vision Intelligence ☆11,194 · Nov 18, 2024 · Updated last year
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text. ☆954 · Mar 19, 2025 · Updated last year
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆255 · Aug 21, 2025 · Updated 7 months ago
- 🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing imp… ☆3,353 · Mar 5, 2024 · Updated 2 years ago
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,603 · Aug 12, 2024 · Updated last year
- (AAAI 2024) BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions ☆260 · Apr 14, 2024 · Updated last year
- Paper list about multimodal and large language models, used only to record papers I read on the daily arXiv for personal needs. ☆755 · Updated this week
- A survey on multimodal learning research. ☆332 · Aug 22, 2023 · Updated 2 years ago
- ☆15 · Dec 10, 2021 · Updated 4 years ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,932 · Mar 14, 2024 · Updated 2 years ago
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… ☆2,557 · Apr 24, 2024 · Updated last year
- A curated list of prompt-based papers in computer vision and vision-language learning. ☆925 · Dec 18, 2023 · Updated 2 years ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆362 · Jan 14, 2025 · Updated last year
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆112 · Aug 21, 2025 · Updated 7 months ago
- [Paper list] Awesome paper list of multimodal dialog, including methods, datasets, and metrics ☆37 · Jan 22, 2025 · Updated last year
- [ICCV 2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control" ☆53 · Sep 21, 2023 · Updated 2 years ago
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆59 · Jun 27, 2023 · Updated 2 years ago
- The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision-language model proposed by Alibaba Cloud. ☆6,583 · Aug 7, 2024 · Updated last year
- A huge dataset for Document Visual Question Answering ☆21 · Jul 29, 2024 · Updated last year
- Recent Advances in Vision and Language Pre-training (VLP) ☆295 · Jun 6, 2023 · Updated 2 years ago
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding ☆3,136 · Jun 4, 2024 · Updated last year
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆323 · Jan 20, 2025 · Updated last year