X-PLUG / mPLUG-Owl
mPLUG-Owl: The Powerful Multi-modal Large Language Model Family
☆2,413 · Updated 3 weeks ago
Alternatives and similar repositories for mPLUG-Owl:
Users interested in mPLUG-Owl are comparing it to the libraries listed below.
- Multimodal-GPT ☆1,488 · Updated last year
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,755 · Updated 3 weeks ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,683 · Updated 4 months ago
- An open-source framework for training large multimodal models. ☆3,822 · Updated 5 months ago
- The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision-language model proposed by Alibaba Cloud. ☆5,490 · Updated 6 months ago
- [TLLM'23] PandaGPT: One Model To Instruction-Follow Them All ☆782 · Updated last year
- ☆765 · Updated 7 months ago
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding ☆2,921 · Updated 8 months ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆860 · Updated 2 months ago
- ☆767 · Updated 6 months ago
- An Open-source Toolkit for LLM Development ☆2,758 · Updated last month
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,646 · Updated 6 months ago
- Mixture-of-Experts for Large Vision-Language Models ☆2,082 · Updated 2 months ago
- [CVPR 2024 Highlight] [VideoChatGPT] ChatGPT with video understanding! It also supports many more LMs such as MiniGPT-4, StableLM, and MOSS. ☆3,160 · Updated last month
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆1,856 · Updated this week
- 🩹 Editing large language models within 10 seconds ⚡ ☆1,310 · Updated last year
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,019 · Updated this week
- A family of lightweight multimodal models. ☆987 · Updated 3 months ago
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆515 · Updated last year
- BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs ☆505 · Updated last year
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,388 · Updated last year
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆784 · Updated 10 months ago
- Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration ☆1,542 · Updated last month
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,139 · Updated 10 months ago
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning)… ☆2,690 · Updated last year
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,807 · Updated 11 months ago
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,618 · Updated last year
- ☆903 · Updated 8 months ago
- [ICLR'24 spotlight] Chinese and English Multimodal Large Model Series (Chat and Paint) | A series of Chinese-English bilingual multimodal large models built on the CPM foundation model ☆1,052 · Updated 8 months ago
- Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks ☆292 · Updated last year