hlchen23 / ADPN-MM
Repository for the ACM MM 2023 accepted paper "Curriculum-Listener: Consistency- and Complementarity-Aware Audio-Enhanced Temporal Sentence Grounding"
☆52 · Updated 2 years ago
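The paper's task, temporal sentence grounding, localizes a (start, end) moment in a video from a natural-language query, and results are conventionally reported as recall at temporal IoU thresholds. As a point of reference only, below is a minimal Python sketch of that standard temporal IoU metric; the function name and example values are illustrative assumptions, not code from ADPN-MM.

```python
# Minimal sketch of the standard temporal IoU metric used to evaluate
# temporal sentence grounding. Illustrative only; not taken from ADPN-MM.

def temporal_iou(pred, gt):
    """IoU between two (start, end) moments, given in seconds."""
    (ps, pe), (gs, ge) = pred, gt
    intersection = max(0.0, min(pe, ge) - max(ps, gs))
    union = max(pe, ge) - min(ps, gs)
    return intersection / union if union > 0 else 0.0

# Example: prediction (5.0, 12.0) vs. ground truth (6.0, 15.0)
# intersection = 6.0, union = 10.0, so IoU = 0.6 (a hit at R@1, IoU=0.5).
print(temporal_iou((5.0, 12.0), (6.0, 15.0)))  # 0.6
```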
Alternatives and similar repositories for ADPN-MM
Users that are interested in ADPN-MM are comparing it to the libraries listed below
- ☆201 · Updated last year
- ☆82 · Updated 10 months ago
- Video dataset dedicated to portrait-mode video recognition. ☆55 · Updated 3 months ago
- ☆185 · Updated 5 months ago
- Precision Search through Multi-Style Inputs ☆73 · Updated 5 months ago
- [CVPR 2025] Online Video Understanding: OVBench and VideoChat-Online ☆79 · Updated 3 months ago
- Official Code for GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation ☆144 · Updated last year
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension ☆28 · Updated 2 years ago
- 🌀 R2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding (ECCV 2024) ☆89 · Updated last year
- A lightweight flexible Video-MLLM developed by TencentQQ Multimedia Research Team. ☆74 · Updated last year
- A new multi-shot video understanding benchmark Shot2Story with comprehensive video summaries and detailed shot-level captions. ☆165 · Updated 11 months ago
- This is the official implementation of ICCV 2025 "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ☆260 · Updated 3 months ago
- official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" ☆41 · Updated 6 months ago
- Official implementation of paper AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding ☆88 · Updated 8 months ago
- [ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆87 · Updated 8 months ago
- The official implementation of our paper "Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption" ☆38 · Updated 8 months ago
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆341 · Updated last year
- [CVPR 2024] Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection ☆111 · Updated last year
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆171 · Updated last year
- official code for paper: Exploring Domain Incremental Video Highlights Detection with the LiveFood Benchmark ☆40 · Updated 2 years ago
- Official GPU implementation of the paper "PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance" ☆131 · Updated last year
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated last year
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆129 · Updated last year
- Research code for ACL 2024 paper: "Synchronized Video Storytelling: Generating Video Narrations with Structured Storyline" ☆41 · Updated last year
- MuLan: Adapting Multilingual Diffusion Models for 110+ Languages (adds multilingual support to any diffusion model without additional training) ☆145 · Updated 11 months ago
- 💡 VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning ☆298 · Updated 3 months ago
- ☆96 · Updated 10 months ago
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ☆268 · Updated last month
- ☆146 · Updated 5 months ago
- Narrative movie understanding benchmark ☆76 · Updated 7 months ago