mbzuai-oryx / LLaVA-pp
🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3)
⭐794
Related projects:
- Official code for the Goldfish model for long-video understanding and MiniGPT4-video for short-video understanding ⭐535
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ⭐719
- Anole: An Open, Autoregressive and Native Multimodal Model for Interleaved Image-Text Generation ⭐644
- Qwen2-VL: the multimodal large language model series developed by the Qwen team, Alibaba Cloud ⭐1,904
- Official repository for the paper PLLaVA ⭐551
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ⭐692
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ⭐688
- SEED-Story: Multimodal Long Story Generation with Large Language Model ⭐692
- A family of lightweight multimodal models ⭐877
- Mixture-of-Experts for Large Vision-Language Models ⭐1,911
- GPT4V-level open-source multimodal model based on Llama3-8B ⭐1,976
- DeepSeek-VL: Towards Real-World Vision-Language Understanding ⭐2,007
- MobiLlama: Small Language Model tailored for edge devices ⭐583
- Multimodal Models in the Real World ⭐372
- HPT: Open Multimodal LLMs from HyperGAI ⭐309
- InternLM-XComposer-2.5: A Versatile Large Vision-Language Model Supporting Long-Contextual Input and Output ⭐2,449
- ✨✨ VITA: Towards Open-Source Interactive Omni Multimodal LLM ⭐751
- mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding ⭐1,318
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ⭐842
- Open-source evaluation toolkit for large vision-language models (LVLMs), supporting ~100 VLMs and 40+ benchmarks ⭐1,018
- Pandora: Towards General World Model with Natural Language Actions and Video States ⭐460
- VILA: a multi-image visual language model with training, inference, and evaluation recipes, deployable from cloud to edge (Jetson Orin and …) ⭐1,786
- The official repo of Qwen2-Audio, the chat & pretrained large audio-language model proposed by Alibaba Cloud ⭐1,069
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ⭐283
- Strong and Open Vision Language Assistant for Mobile Devices ⭐971
- Latte: Latent Diffusion Transformer for Video Generation ⭐1,637
- Official code implementation of Vary-toy (Small Language Model Meets with Reinforced Vision Vocabulary) ⭐587
- BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs ⭐497