kahnchana / mvu
🤖 [ICLR'25] Multimodal Video Understanding Framework (MVU)
☆29 · Updated last month
Alternatives and similar repositories for mvu:
Users interested in mvu are comparing it to the repositories listed below
- Language Repository for Long Video Understanding ☆31 · Updated 9 months ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆59 · Updated 8 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆44 · Updated 2 months ago
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆22 · Updated 3 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆41 · Updated 2 months ago
- ☆72 · Updated 3 months ago
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆57 · Updated 3 weeks ago
- Official implementation for "A Simple LLM Framework for Long-Range Video Question-Answering" ☆92 · Updated 4 months ago
- Code for the paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆97 · Updated 3 weeks ago
- ☆41 · Updated last year
- Vinci: A Real-time Embodied Smart Assistant based on Egocentric Vision-Language Model ☆49 · Updated 2 months ago
- Official PyTorch code of GroundVQA (CVPR'24) ☆56 · Updated 6 months ago
- Implementation of MC-ViT from the paper "Memory Consolidation Enables Long-Context Video Understanding" ☆21 · Updated last month
- Official code for MotionBench (CVPR 2025) ☆30 · Updated 2 weeks ago
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆35 · Updated 5 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆57 · Updated 9 months ago
- [NeurIPS 2024] Official Repository of Multi-Object Hallucination in Vision-Language Models ☆28 · Updated 4 months ago
- ACL'24 (Oral) Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆63 · Updated 6 months ago
- Unsolvable Problem Detection: Evaluating Trustworthiness of Vision Language Models ☆74 · Updated 6 months ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆25 · Updated 5 months ago
- ☆39 · Updated 4 months ago
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding ☆33 · Updated 3 weeks ago
- Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆65 · Updated 3 weeks ago
- Official implementation for CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding ☆45 · Updated last year
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆52 · Updated last month
- Official implementation of ISR-DPO: Aligning Large Multimodal Models for Videos by Iterative Self-Retrospective DPO (AAAI'25) ☆18 · Updated last month
- OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? [CVPR 2025] ☆38 · Updated 3 weeks ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆27 · Updated 8 months ago
- This is the official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 3 months ago
- [CVPR 2024] The official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" ☆38 · Updated last month