NVlabs / VILA
VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud.
☆3,280 · Updated last week
Alternatives and similar repositories for VILA
Users interested in VILA are comparing it to the repositories listed below.
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆2,446 · Updated this week
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,908 · Updated 7 months ago
- Witness the aha moment of a VLM for less than $3. ☆3,706 · Updated last week
- Next-Token Prediction is All You Need ☆2,134 · Updated 2 months ago
- LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning ☆1,994 · Updated 2 weeks ago
- Mixture-of-Experts for Large Vision-Language Models ☆2,173 · Updated 5 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,221 · Updated last year
- GPT-4V-level open-source multimodal model based on Llama3-8B ☆2,358 · Updated 2 months ago
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o (an open-source multimodal dialogue model approaching GPT-4o performance) ☆8,226 · Updated this week
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆1,887 · Updated last week
- Famous Vision Language Models and Their Architectures ☆843 · Updated 3 months ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆1,167 · Updated 4 months ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆2,005 · Updated 10 months ago
- A suite of image and video neural tokenizers ☆1,627 · Updated 3 months ago
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆1,172 · Updated this week
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. ☆923 · Updated 2 months ago
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,839 · Updated this week
- Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,121 · Updated 3 weeks ago
- [EMNLP 2024 🔥] Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,254 · Updated 5 months ago
- A family of lightweight multimodal models. ☆1,018 · Updated 6 months ago
- Solve Visual Understanding with Reinforced VLMs ☆5,028 · Updated 3 weeks ago
- 🔥🔥🔥 Latest Papers, Codes and Datasets on Vid-LLMs. ☆2,339 · Updated 3 weeks ago
- Frontier Multimodal Foundation Models for Image and Video Understanding ☆828 · Updated 2 weeks ago
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,290 · Updated last month
- VisionLLM Series ☆1,066 · Updated 3 months ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,166 · Updated this week
- Codebase for Aria - an Open Multimodal Native MoE ☆1,041 · Updated 4 months ago
- VideoSys: An easy and efficient system for video generation ☆1,967 · Updated 2 months ago
- 4M: Massively Multimodal Masked Modeling ☆1,721 · Updated last week