PKU-YuanGroup / MoE-LLaVA
Mixture-of-Experts for Large Vision-Language Models
☆2,058 · Updated last month
Alternatives and similar repositories for MoE-LLaVA:
Users interested in MoE-LLaVA are comparing it to the repositories listed below.
- ☆3,316 · Updated 3 months ago
- A family of lightweight multimodal models. ☆979 · Updated 2 months ago
- [EMNLP 2024 🔥] Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,124 · Updated last month
- Emu Series: Generative Multimodal Models from BAAI ☆1,675 · Updated 4 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,117 · Updated 9 months ago
- GPT-4V-level open-source multimodal model based on Llama3-8B ☆2,228 · Updated 4 months ago
- Next-Token Prediction is All You Need ☆1,976 · Updated 3 months ago
- Open-source evaluation toolkit for large vision-language models (LVLMs), supporting 160+ VLMs and 50+ benchmarks ☆1,735 · Updated this week
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆721 · Updated 11 months ago
- DeepSeek-VL: Towards Real-World Vision-Language Understanding ☆2,556 · Updated 9 months ago
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,731 · Updated last week
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆773 · Updated 10 months ago
- VisionLLM Series ☆983 · Updated this week
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,831 · Updated 2 months ago
- A Framework for Small-scale Large Multimodal Models ☆715 · Updated this week
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆1,040 · Updated this week
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆858 · Updated last month
- An efficient, flexible, and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...) ☆4,178 · Updated last week
- Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models" ☆3,236 · Updated 8 months ago
- An Open-source Toolkit for LLM Development ☆2,747 · Updated 2 weeks ago
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language ☆614 · Updated 3 months ago
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,400 · Updated this week
- Anole: An Open, Autoregressive, and Native Multimodal Model for Interleaved Image-Text Generation ☆715 · Updated 5 months ago
- A state-of-the-art open visual language model | Multimodal pre-trained model ☆6,309 · Updated 8 months ago
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) ☆824 · Updated 6 months ago
- ☆1,784 · Updated 7 months ago
- Official code for the Goldfish model (long-video understanding) and MiniGPT4-video (short-video understanding) ☆583 · Updated last month
- Code and documents for LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,642 · Updated 5 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,342 · Updated last month
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆1,913 · Updated 5 months ago