swordlidev / LLaVA-MR
LLaVA-MR: Large Language-and-Vision Assistant for Video Moment Retrieval
☆8 · Updated 7 months ago
Alternatives and similar repositories for LLaVA-MR
Users interested in LLaVA-MR are comparing it to the repositories listed below.
- ☆12 · Updated 3 months ago
- [NeurIPS 2024] Mixture of Experts for Audio-Visual Learning ☆15 · Updated 6 months ago
- This repository contains the implementation of our NeurIPS'24 paper "Temporal Sentence Grounding with Relevance Feedback in Videos" ☆10 · Updated 7 months ago
- [ICLR 2025] Causal Graphical Models for Vision-Language Compositional Understanding ☆9 · Updated 3 months ago
- Official repository of "Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning" ☆34 · Updated 5 months ago
- Papers of "A Survey on Large Multi-Modal Models from the Perspective of Input-Output Space Extension" ☆10 · Updated 7 months ago
- ☆12 · Updated 6 months ago
- ☆14 · Updated 7 months ago
- [CVPR 2025] Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering ☆38 · Updated this week
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆104 · Updated 5 months ago
- [ICCV 2025] CorrCLIP: Reconstructing Patch Correlations in CLIP for Open-Vocabulary Semantic Segmentation ☆10 · Updated this week
- The code for the paper "Efficient Self-Supervised Video Hashing with Selective State Spaces" (AAAI'25) ☆18 · Updated 7 months ago
- [CBMI 2024 Best Paper] Official repository of the paper "Is CLIP the main roadblock for fine-grained open-world perception?" ☆27 · Updated 2 months ago
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability ☆94 · Updated 7 months ago
- ☆80 · Updated 8 months ago
- Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning ☆24 · Updated last week
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆29 · Updated 3 months ago
- Official Repository of Personalized Visual Instruct Tuning ☆31 · Updated 4 months ago
- Official code for WACV 2024 paper, "Annotation-free Audio-Visual Segmentation" ☆31 · Updated 9 months ago
- ☆70 · Updated 2 months ago
- A lightweight, flexible Video-MLLM developed by the TencentQQ Multimedia Research Team ☆72 · Updated 9 months ago
- SAVEn-Vid: Synergistic Audio-Visual Integration for Enhanced Understanding in Long Video Context ☆5 · Updated 6 months ago
- ☆32 · Updated last year
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆56 · Updated 8 months ago
- [ICCV 2025] Dynamic-VLM ☆23 · Updated 7 months ago
- [CVPR 2024 Highlight] Official implementation of the paper: Cooperation Does Matter: Exploring Multi-Order Bilateral Relations for Audio-… ☆39 · Updated 3 months ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆34 · Updated 4 months ago
- [ICCV 2025] Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆71 · Updated 4 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆68 · Updated 2 months ago
- Official PyTorch Implementation of ParGo: Bridging Vision-Language with Partial and Global Views (AAAI 2025) ☆14 · Updated 6 months ago