hlchen23 / ADPN-MM
Repository for the ACM MM 2023 accepted paper "Curriculum-Listener: Consistency- and Complementarity-Aware Audio-Enhanced Temporal Sentence Grounding"
☆49 · Updated last year
Alternatives and similar repositories for ADPN-MM
Users interested in ADPN-MM are comparing it to the repositories listed below
- ☆76 · Updated 2 months ago
- ☆186 · Updated 10 months ago
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension · ☆26 · Updated last year
- This is the official implementation of "Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams" · ☆181 · Updated 5 months ago
- Video dataset dedicated to portrait-mode video recognition · ☆52 · Updated 5 months ago
- [CVPR 2024] Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection · ☆94 · Updated 10 months ago
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team · ☆71 · Updated 7 months ago
- [CVPR 2025] Online Video Understanding: OVBench and VideoChat-Online · ☆38 · Updated last month
- A Simple Framework of Small-scale LMMs for Video Understanding · ☆65 · Updated 2 weeks ago
- LinVT: Empower Your Image-level Large Language Model to Understand Videos · ☆77 · Updated 5 months ago
- Official implementation of the paper "AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding" · ☆61 · Updated last month
- 🌀 R2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding (ECCV 2024) · ☆83 · Updated 11 months ago
- Official code for the paper "Exploring Domain Incremental Video Highlights Detection with the LiveFood Benchmark" · ☆37 · Updated last year
- ☆175 · Updated this week
- [ECCV 2024] Official code implementation of "Merlin: Empowering Multimodal LLMs with Foresight Minds" · ☆93 · Updated 11 months ago
- ☆52 · Updated last year
- LAVIS - A One-stop Library for Language-Vision Intelligence · ☆48 · Updated 9 months ago
- Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models · ☆111 · Updated 2 months ago
- Precision Search through Multi-Style Inputs · ☆69 · Updated last month
- Narrative movie understanding benchmark · ☆70 · Updated last year
- [ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models · ☆62 · Updated 2 weeks ago
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability · ☆94 · Updated 6 months ago
- Explore the Limits of Omni-modal Pretraining at Scale · ☆100 · Updated 9 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection · ☆82 · Updated last month
- Official PyTorch repository for CG-DETR "Correlation-guided Query-Dependency Calibration in Video Representation Learning for Temporal Gr…" · ☆132 · Updated 9 months ago
- Official repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges · ☆68 · Updated 3 months ago
- Research code for the Multimodal-Cognition Team at Ant Group · ☆147 · Updated 2 weeks ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … · ☆115 · Updated last month
- Research code for the ACL 2024 paper "Synchronized Video Storytelling: Generating Video Narrations with Structured Storyline" · ☆31 · Updated 5 months ago
- The official repo for "Vidi: Large Multimodal Models for Video Understanding and Editing" · ☆100 · Updated last month