NVlabs / VILA
VILA - a multi-image visual language model with training, inference, and evaluation recipes, deployable from cloud to edge (Jetson Orin and laptops)
☆1,999 · Updated 2 weeks ago
Related projects
Alternatives and complementary repositories for VILA
- Mixture-of-Experts for Large Vision-Language Models ☆1,989 · Updated 6 months ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆1,840 · Updated 3 months ago
- Janus-Series: Unified Multimodal Understanding and Generation Models ☆1,084 · Updated last week
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,763 · Updated 3 weeks ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆877 · Updated last week
- GPT4V-level open-source multimodal model based on Llama3-8B ☆2,122 · Updated 2 months ago
- Next-Token Prediction is All You Need ☆1,824 · Updated 3 weeks ago
- DeepSeek-VL: Towards Real-World Vision-Language Understanding ☆2,077 · Updated 6 months ago
- Open-source evaluation toolkit for large vision-language models (LVLMs), supporting 160+ VLMs and 50+ benchmarks ☆1,361 · Updated this week
- A family of lightweight multimodal models. ☆933 · Updated this week
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆705 · Updated 9 months ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,324 · Updated 3 months ago
- Qwen2-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud. ☆3,153 · Updated last month
- [EMNLP 2024 🔥] Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,003 · Updated last month
- InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output ☆2,525 · Updated last month
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,041 · Updated 7 months ago
- 🔥🔥🔥 Latest papers, code, and datasets on Vid-LLMs. ☆1,546 · Updated last month
- Reaching LLaMA2 Performance with 0.1M Dollars ☆960 · Updated 3 months ago
- Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models" ☆3,211 · Updated 6 months ago
- 4M: Massively Multimodal Masked Modeling ☆1,607 · Updated last month
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆726 · Updated 7 months ago
- Anole: An Open, Autoregressive, and Native Multimodal Model for Interleaved Image-Text Generation ☆675 · Updated 3 months ago
- VisionLLM Series ☆924 · Updated last month
- Emu Series: Generative Multimodal Models from BAAI ☆1,662 · Updated last month
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o; an open-source multimodal dialogue model approaching GPT-4o's performance ☆6,055 · Updated this week
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,255 · Updated this week
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) ☆813 · Updated 4 months ago
- A novel Multimodal Large Language Model (MLLM) architecture designed to structurally align visual and textual embeddings. ☆526 · Updated 2 weeks ago