yeliudev / VideoMind
VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning
☆250 · Updated 2 weeks ago
Alternatives and similar repositories for VideoMind
Users interested in VideoMind are comparing it to the libraries listed below.
- This is the official implementation of the ICCV 2025 paper "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ☆232 · Updated 2 months ago
- 🔥🔥 First-ever hour-scale video understanding models ☆541 · Updated 2 months ago
- Official GPU implementation of the paper "PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance" ☆129 · Updated 9 months ago
- [ICML 2025] Official PyTorch implementation of LongVU ☆397 · Updated 4 months ago
- The official repo for "Vidi: Large Multimodal Models for Video Understanding and Editing" ☆134 · Updated 2 weeks ago
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ☆263 · Updated last year
- This is the official implementation of our paper "Video-RAG: Visually-aligned Retrieval-Augmented Long Video Comprehension" ☆268 · Updated 2 months ago
- VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning ☆184 · Updated 3 weeks ago
- Tarsier -- a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆464 · Updated last month
- [ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆74 · Updated 3 months ago
- This is the official code of VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding (ECCV 2024) ☆253 · Updated 9 months ago
- ☆123 · Updated last month
- Official implementation of the paper "AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding" ☆81 · Updated 4 months ago
- **Deep Video Discovery (DVD)** is a deep-research style question answering agent designed for understanding extra-long videos. ☆71 · Updated last month
- Long Context Transfer from Language to Vision ☆393 · Updated 5 months ago
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ☆251 · Updated last month
- Ming - facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM. ☆454 · Updated last week
- [CVPR 2025] Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction ☆131 · Updated 5 months ago
- [ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs ☆305 · Updated 3 months ago
- Pixel-Level Reasoning Model trained with RL ☆204 · Updated last week
- [CVPR 2025] EgoLife: Towards Egocentric Life Assistant ☆327 · Updated 5 months ago
- ☆271 · Updated last month
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆465 · Updated 3 months ago
- MiMo-VL ☆538 · Updated 3 weeks ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆685 · Updated last week
- [ICCV 2025] Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆76 · Updated 6 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆102 · Updated 3 months ago
- LiveCC: Learning Video LLM with Streaming Speech Transcription at Scale (CVPR 2025) ☆264 · Updated last week
- UniVG-R1: Reasoning Guided Universal Visual Grounding with Reinforcement Learning ☆139 · Updated 3 months ago
- A Simple Framework of Small-scale LMMs for Video Understanding ☆91 · Updated 3 months ago