Frontier Multimodal Foundation Models for Image and Video Understanding
☆1,148 · Updated Aug 14, 2025
Alternatives and similar repositories for VideoLLaMA3
Users interested in VideoLLaMA3 are comparing it to the libraries listed below.
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs · ☆1,297 · Updated Jan 23, 2025
- [ICLR2026] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling · ☆519 · Updated Nov 18, 2025
- ☆81 · Updated Nov 24, 2024
- The code for PixelRefer & VideoRefer · ☆349 · Updated Nov 16, 2025
- ☆4,645 · Updated Apr 15, 2026
- 🔥🔥First-ever hour-scale video understanding models · ☆621 · Updated Jul 14, 2025
- ✨✨The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio · ☆54 · Updated Jul 11, 2025
- Tarsier: a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… · ☆540 · Updated Aug 14, 2025
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥the first paper to explore R1 for video] · ☆858 · Updated Dec 14, 2025
- 🧠 VideoMind: A Chain-of-LoRA Agent for Temporal-Grounded Video Reasoning (ICLR 2026) · ☆326 · Updated Feb 8, 2026
- [ECCV2024] Video Foundation Models & Data for Multimodal Understanding · ☆2,254 · Updated Mar 25, 2026
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" · ☆198 · Updated Mar 17, 2025
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability · ☆106 · Updated Nov 28, 2024
- Qwen3-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud. · ☆19,105 · Updated Jan 30, 2026
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding · ☆3,145 · Updated Jun 4, 2024
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling · ☆152 · Updated Aug 22, 2025
- [NIPS2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning · ☆266 · Updated Oct 18, 2025
- [EMNLP 2025 Findings] Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models · ☆143 · Updated Aug 21, 2025
- Long Context Transfer from Language to Vision · ☆403 · Updated Mar 18, 2025
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… · ☆3,789 · Updated Mar 12, 2026
- [ICML 2025] Official PyTorch implementation of LongVU · ☆425 · Updated May 8, 2025
- 🔥🔥🔥 [IEEE TCSVT] Latest Papers, Codes and Datasets on Vid-LLMs · ☆3,164 · Updated Mar 28, 2026
- [EMNLP 2024🔥] Video-LLaVA: Learning United Visual Representation by Alignment Before Projection · ☆3,483 · Updated Dec 3, 2024
- ✨✨[CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis · ☆762 · Updated Dec 8, 2025
- Official repository for the paper PLLaVA · ☆671 · Updated Jul 28, 2024
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C… · ☆284 · Updated Jan 16, 2025
- ✨✨[NeurIPS 2025] VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction · ☆2,509 · Updated Mar 28, 2025
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! · ☆2,260 · Updated Apr 13, 2026
- Official Repo For Pixel-LLM Codebase: Sa2VA (Arxiv-25), SAMTok (CVPR-26), VRT, SaSaSa2VA (1st solution for LSVOS) · ☆1,592 · Updated Feb 27, 2026
- [CVPR'2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments" · ☆296 · Updated Jun 13, 2024
- LinVT: Empower Your Image-level Large Language Model to Understand Videos · ☆84 · Updated Dec 30, 2024
- [AAAI 26 Demo] Official repo for CAT-V - Caption Anything in Video: Object-centric Dense Video Captioning with Spatiotemporal Multimodal P… · ☆65 · Updated Jan 27, 2026
- [CVPR 2026] MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources · ☆218 · Updated Sep 26, 2025
- Solve Visual Understanding with Reinforced VLMs · ☆5,950 · Updated Mar 12, 2026
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o's performance. · ☆10,003 · Updated Sep 22, 2025
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection · ☆141 · Updated Jul 28, 2025
- ✨First Open-Source R1-like Video-LLM [2025/02/18] · ☆384 · Updated Feb 23, 2025
- Witness the aha moment of VLM with less than $3. · ☆4,056 · Updated May 19, 2025
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding · ☆419 · Updated May 8, 2025