farewellthree / PPLLaVA
Official GPU implementation of the paper "PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance"
☆130 Updated last year
Alternatives and similar repositories for PPLLaVA
Users that are interested in PPLLaVA are comparing it to the libraries listed below
- VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning ☆297 Updated 3 months ago
- [ICML 2025] Official PyTorch implementation of LongVU ☆417 Updated 8 months ago
- ☆82 Updated 10 months ago
- [ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆87 Updated 7 months ago
- ☆185 Updated 5 months ago
- A new multi-shot video understanding benchmark Shot2Story with comprehensive video summaries and detailed shot-level captions. ☆164 Updated 11 months ago
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆171 Updated last year
- LiveCC: Learning Video LLM with Streaming Speech Transcription at Scale (CVPR 2025) ☆363 Updated 2 months ago
- Official Code for GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation ☆144 Updated last year
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ☆287 Updated last year
- MovieAgent: Automated Movie Generation via Multi-Agent CoT Planning ☆277 Updated 9 months ago
- This is the official implementation of ICCV 2025 "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ☆260 Updated 2 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆128 Updated last year
- [CVPR 2025] Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction ☆154 Updated 9 months ago
- ☆201 Updated last year
- The official repo for "Vidi: Large Multimodal Models for Video Understanding and Editing" ☆556 Updated last month
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ☆268 Updated last month
- [ACL 2025 Oral & Award] Evaluate Image/Video Generation like Humans - Fast, Explainable, Flexible ☆114 Updated 5 months ago
- Repository for the MM'23 accepted paper "Curriculum-Listener: Consistency- and Complementarity-Aware Audio-Enhanced Temporal Sentence Groundi… ☆52 Updated 2 years ago
- Long Context Transfer from Language to Vision ☆398 Updated 9 months ago
- ☆145 Updated 5 months ago
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models ☆280 Updated last year
- [ICLR 2025] VideoGrain: This repo is the official implementation of "VideoGrain: Modulating Space-Time Attention for Multi-Grained Video … ☆159 Updated 9 months ago
- [AAAI 2025] StoryWeaver: A Unified World Model for Knowledge-Enhanced Story Character Customization ☆224 Updated 9 months ago
- Multimodal Models in Real World ☆552 Updated 10 months ago
- Tarsier -- a family of large-scale video-language models, which is designed to generate high-quality video descriptions, together with g… ☆510 Updated 4 months ago
- [NeurIPS 2025] This is the official implementation of our paper "Video-RAG: Visually-aligned Retrieval-Augmented Long Video Comprehensi… ☆375 Updated 2 months ago
- Code release for our NeurIPS 2024 Spotlight paper "GenArtist: Multimodal LLM as an Agent for Unified Image Generation and Editing" ☆158 Updated last year
- InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions ☆132 Updated last year
- [CVPR 2025] EgoLife: Towards Egocentric Life Assistant ☆373 Updated 9 months ago