farewellthree / PPLLaVA
Official GPU implementation of the paper "PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance"
★131 · Updated last year
Alternatives and similar repositories for PPLLaVA
Users who are interested in PPLLaVA are comparing it to the libraries listed below.
- VideoMind: A Chain-of-LoRA Agent for Temporal-Grounded Video Reasoning (ICLR 2026) ★301 · Updated last week
- [ICML 2025] Official PyTorch implementation of LongVU ★420 · Updated 8 months ago
- A new multi-shot video understanding benchmark Shot2Story with comprehensive video summaries and detailed shot-level captions. ★166 · Updated last year
- [ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ★90 · Updated 8 months ago
- ★185 · Updated 6 months ago
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ★172 · Updated last year
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ★286 · Updated last year
- ★82 · Updated 10 months ago
- Official implementation of ICCV 2025 "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ★266 · Updated 3 months ago
- MovieAgent: Automated Movie Generation via Multi-Agent CoT Planning ★286 · Updated 10 months ago
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ★269 · Updated 2 weeks ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ★129 · Updated last year
- ★147 · Updated 6 months ago
- Official code for GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation ★144 · Updated last year
- LiveCC: Learning Video LLM with Streaming Speech Transcription at Scale (CVPR 2025) ★407 · Updated 3 months ago
- [CVPR 2025] Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction ★163 · Updated 10 months ago
- The official repo for "Vidi: Large Multimodal Models for Video Understanding and Editing" ★574 · Updated last week
- ★203 · Updated last year
- Repository for the MM '23 accepted paper "Curriculum-Listener: Consistency- and Complementarity-Aware Audio-Enhanced Temporal Sentence Groundi… ★52 · Updated 2 years ago
- Official repository of the paper VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding ★292 · Updated 5 months ago
- [ACL 2025 Oral & Award] Evaluate Image/Video Generation like Humans - Fast, Explainable, Flexible ★116 · Updated 5 months ago
- Long Context Transfer from Language to Vision ★398 · Updated 10 months ago
- [ICLR 2025] Official implementation of "VideoGrain: Modulating Space-Time Attention for Multi-Grained Video … ★160 · Updated 10 months ago
- Multimodal Models in Real World ★554 · Updated 11 months ago
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models ★280 · Updated last year
- [AAAI 2025] StoryWeaver: A Unified World Model for Knowledge-Enhanced Story Character Customization ★227 · Updated 3 weeks ago
- Tarsier -- a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ★515 · Updated 5 months ago
- Official repository for "VideoPrism: A Foundational Visual Encoder for Video Understanding" (ICML 2024) ★348 · Updated 2 weeks ago
- ICML 2025 - Impossible Videos ★83 · Updated 6 months ago
- [CVPR 2025] EgoLife: Towards Egocentric Life Assistant ★382 · Updated 10 months ago