PKU-YuanGroup / MoE-LLaVA
Mixture-of-Experts for Large Vision-Language Models
☆1,971 · Updated 5 months ago
Related projects
Alternatives and complementary repositories for MoE-LLaVA
- 【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆2,966 · Updated last month
- DeepSeek-VL: Towards Real-World Vision-Language Understanding ☆2,064 · Updated 6 months ago
- Open-source evaluation toolkit for large vision-language models (LVLMs), supporting 160+ VLMs and 50+ benchmarks ☆1,294 · Updated this week
- Next-Token Prediction is All You Need ☆1,786 · Updated 2 weeks ago
- A family of lightweight multimodal models. ☆928 · Updated 2 weeks ago
- InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output ☆2,509 · Updated 3 weeks ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,658 · Updated last month
- VILA - a multi-image visual language model with training, inference, and evaluation recipes, deployable from cloud to edge (Jetson Orin and… ☆1,968 · Updated last week
- 【ICLR 2024🔥】Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆717 · Updated 7 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆703 · Updated 9 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,032 · Updated 6 months ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆847 · Updated this week
- GPT4V-level open-source multi-modal model based on Llama3-8B ☆2,100 · Updated 2 months ago
- An efficient, flexible, and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...) ☆3,931 · Updated 2 weeks ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆1,823 · Updated 3 months ago
- A Framework of Small-scale Large Multimodal Models ☆635 · Updated 3 weeks ago
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) ☆807 · Updated 3 months ago
- Official code for the Goldfish model for long video understanding and MiniGPT4-video for short video understanding ☆552 · Updated last month
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,749 · Updated last week
- An Open-source Toolkit for LLM Development ☆2,717 · Updated 5 months ago
- Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models" ☆3,206 · Updated 6 months ago
- Code for "AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling" ☆771 · Updated 2 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,245 · Updated 2 weeks ago
- 🔥🔥🔥 Latest Papers, Code, and Datasets on Vid-LLMs ☆1,502 · Updated last month
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,034 · Updated 6 months ago
- mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding ☆1,514 · Updated last month
- Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation ☆913 · Updated last week
- Large Language-and-Vision Assistant for Biomedicine, built towards multimodal GPT-4 level capabilities. ☆1,542 · Updated 2 months ago
- MiniSora: A community aiming to explore the implementation path and future development direction of Sora. ☆1,214 · Updated last month