NVlabs / VILA
VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud.
☆2,835 · Updated this week
Alternatives and similar repositories for VILA:
Users interested in VILA are comparing it to the repositories listed below.
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆1,913 · Updated 5 months ago
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,831 · Updated 2 months ago
- DeepSeek-VL: Towards Real-World Vision-Language Understanding ☆2,556 · Updated 9 months ago
- Mixture-of-Experts for Large Vision-Language Models ☆2,058 · Updated last month
- Next-Token Prediction is All You Need ☆1,976 · Updated 3 months ago
- GPT4V-level open-source multi-modal model based on Llama3-8B ☆2,228 · Updated 4 months ago
- 4M: Massively Multimodal Masked Modeling ☆1,671 · Updated 3 months ago
- Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models" ☆3,236 · Updated 8 months ago
- Mora: More like Sora for Generalist Video Generation ☆1,543 · Updated 3 months ago
- Qwen2-VL is the multimodal large language model series developed by the Qwen team, Alibaba Cloud. ☆4,328 · Updated this week
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o, an open-source multimodal dialogue model with performance approaching GPT-4o. ☆6,880 · Updated last month
- [EMNLP 2024] Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,124 · Updated last month
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,731 · Updated last week
- PyTorch code and models for V-JEPA self-supervised learning from video. ☆2,747 · Updated 5 months ago
- Open-source evaluation toolkit for large vision-language models (LVLMs), supporting 160+ VLMs and 50+ benchmarks ☆1,735 · Updated this week
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,157 · Updated 2 months ago
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. ☆599 · Updated 2 months ago
- Codebase for Aria - an Open Multimodal Native MoE ☆978 · Updated last week
- A suite of image and video neural tokenizers ☆1,524 · Updated last week
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆1,040 · Updated this week
- MiniSora: A community project that aims to explore the implementation path and future development direction of Sora. ☆1,250 · Updated last month
- VideoSys: An easy and efficient system for video generation ☆1,891 · Updated 3 weeks ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,675 · Updated 4 months ago
- LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning ☆1,768 · Updated last week
- DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding ☆1,005 · Updated last week
- An efficient, flexible and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...) ☆4,178 · Updated last week
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆864 · Updated last week
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,466 · Updated 5 months ago