SkyworkAI / Vitron
NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing
☆578 · Updated last year
Alternatives and similar repositories for Vitron
Users interested in Vitron are comparing it to the repositories listed below.
- 🔥🔥 First-ever hour-scale video understanding models ☆595 · Updated 5 months ago
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆855 · Updated last year
- Project page for "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" ☆580 · Updated 4 months ago
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆490 · Updated last month
- Code of the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation" ☆253 · Updated last year
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆677 · Updated 11 months ago
- Tarsier: a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆507 · Updated 4 months ago
- [CVPR 2024] PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding ☆246 · Updated 10 months ago
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆505 · Updated last year
- [ECCV 2024] Tokenize Anything via Prompting ☆600 · Updated last year
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆341 · Updated last year
- [NeurIPS 2025 Spotlight 🔥] Official implementation of "UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Langu…" ☆261 · Updated last month
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆405 · Updated 7 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆782 · Updated 2 weeks ago
- Official repository for the paper PLLaVA ☆676 · Updated last year
- Official implementation of ICCV 2025 "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ☆256 · Updated 2 months ago
- Official code of VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding (ECCV 2024) ☆277 · Updated last year
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ☆246 · Updated last year
- LLaVA-UHD v3: Progressive Visual Compression for Efficient Native-Resolution Encoding in MLLMs ☆406 · Updated last week
- Vision Manus: Your versatile Visual AI assistant ☆304 · Updated 2 months ago
- Official code for the Goldfish model for long video understanding and MiniGPT4-video for short video understanding ☆637 · Updated last year
- R1-Onevision: a visual language model capable of deep CoT reasoning ☆574 · Updated 8 months ago
- Official repository of the paper VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding ☆292 · Updated 4 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆935 · Updated 4 months ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆390 · Updated last year
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆699 · Updated 3 weeks ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆600 · Updated last year
- VisionLLM Series ☆1,131 · Updated 10 months ago
- New generation of CLIP with fine-grained discrimination capability (ICML 2025) ☆516 · Updated 2 months ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆879 · Updated last year