kahnchana / mvu
[ICLR'25] Multimodal Video Understanding Framework (MVU)
☆52 · Updated 11 months ago
Alternatives and similar repositories for mvu
Users interested in mvu are comparing it to the libraries listed below.
- Code for our ACL 2025 paper "Language Repository for Long Video Understanding" ☆33 · Updated last year
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆72 · Updated last year
- Egocentric Video Understanding Dataset (EVUD) ☆32 · Updated last year
- Official implementation for "A Simple LLM Framework for Long-Range Video Question-Answering" ☆106 · Updated last year
- ☆130 · Updated 9 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆55 · Updated 6 months ago
- Benchmarking Panoptic Video Scene Graph Generation (PVSG), CVPR'23 ☆102 · Updated last year
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆64 · Updated 5 months ago
- Code for CVPR25 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆152 · Updated 6 months ago
- Vinci: A Real-time Embodied Smart Assistant based on Egocentric Vision-Language Model ☆80 · Updated last month
- ☆41 · Updated 9 months ago
- ☆106 · Updated last year
- ☆96 · Updated 6 months ago
- Video-Panda: Parameter-efficient Alignment for Encoder-free Video-Language Models [CVPR 2025] ☆76 · Updated 6 months ago
- [ECCV2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆96 · Updated last year
- [ECCV2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation…" ☆39 · Updated 10 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆68 · Updated last year
- Official implementation of "Open-o3 Video: Grounded Video Reasoning with Explicit Spatio-Temporal Evidence" ☆128 · Updated 3 weeks ago
- ☆71 · Updated last year
- ☆42 · Updated 7 months ago
- Official repo of the ICLR 2025 paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" ☆29 · Updated 6 months ago
- ACL'24 (Oral) Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆76 · Updated last year
- An open source implementation of CLIP (with TULIP support) ☆165 · Updated 8 months ago
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆102 · Updated last year
- ☆113 · Updated 5 months ago
- [EMNLP 2025 Findings] Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆139 · Updated 4 months ago
- [ICCV 2025] Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆80 · Updated 10 months ago
- ☆191 · Updated last year
- [ICCVW 25] LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning ☆157 · Updated 5 months ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆77 · Updated 4 months ago