yeliudev / VideoMind
💡 VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning
⭐205 · Updated last week
Alternatives and similar repositories for VideoMind
Users interested in VideoMind are comparing it to the repositories listed below.
- Official GPU implementation of the paper "PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance" · ⭐130 · Updated 6 months ago
- This is the official implementation of "Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams" · ⭐181 · Updated 5 months ago
- The official repo for "Vidi: Large Multimodal Models for Video Understanding and Editing" · ⭐100 · Updated last month
- 🔥🔥 First-ever hour-scale video understanding models · ⭐331 · Updated this week
- [ICML 2025] Official PyTorch implementation of LongVU · ⭐378 · Updated 3 weeks ago
- LiveCC: Learning Video LLM with Streaming Speech Transcription at Scale (CVPR 2025) · ⭐210 · Updated this week
- VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning · ⭐140 · Updated 2 weeks ago
- [CVPR 2025] EgoLife: Towards Egocentric Life Assistant · ⭐286 · Updated 2 months ago
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models · ⭐220 · Updated 8 months ago
- Long Context Transfer from Language to Vision · ⭐375 · Updated 2 months ago
- This is the official code of VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding (ECCV 2024) · ⭐204 · Updated 5 months ago
- [CVPR 2025] Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction · ⭐110 · Updated 2 months ago
- ⭐186 · Updated 10 months ago
- Official implementation of the paper "AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding" · ⭐61 · Updated last month
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. · ⭐234 · Updated 3 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] · ⭐546 · Updated last week
- [ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models · ⭐62 · Updated 2 weeks ago
- Tarsier: a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… · ⭐379 · Updated last month
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) · ⭐164 · Updated 10 months ago
- This is the official implementation of our paper "Video-RAG: Visually-aligned Retrieval-Augmented Long Video Comprehension" · ⭐192 · Updated 3 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines · ⭐126 · Updated 6 months ago
- [ICLR 2025] VideoGrain: This repo is the official implementation of "VideoGrain: Modulating Space-Time Attention for Multi-Grained Video …" · ⭐130 · Updated 2 months ago
- ⭐76 · Updated 2 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning · ⭐72 · Updated last week
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling · ⭐421 · Updated last week
- ⭐173 · Updated 3 months ago
- MovieAgent: Automated Movie Generation via Multi-Agent CoT Planning · ⭐196 · Updated 2 months ago
- LinVT: Empower Your Image-level Large Language Model to Understand Videos · ⭐77 · Updated 5 months ago
- HumanOmni · ⭐168 · Updated 2 months ago