GeWu-Lab / TSPM
Official repository for "Boosting Audio Visual Question Answering via Key Semantic-Aware Cues" in ACM MM 2024.
☆16 · Updated last year
Alternatives and similar repositories for TSPM
Users interested in TSPM are comparing it to the repositories listed below
- ☆37 · Updated 6 months ago
- [CVPR 2025] Towards Open-Vocabulary Audio-Visual Event Localization ☆39 · Updated 10 months ago
- Dense-Localizing Audio-Visual Events in Untrimmed Videos: A Large-Scale Benchmark and Baseline (CVPR 2023) ☆70 · Updated last month
- Official code for the WACV 2024 paper "Annotation-free Audio-Visual Segmentation" ☆37 · Updated last year
- This repository contains code for the AAAI 2025 paper "Dense Audio-Visual Event Localization under Cross-Modal Consistency and Multi-Temporal …" ☆22 · Updated 5 months ago
- [CVPR 2024 Highlight] Official implementation of the paper: Cooperation Does Matter: Exploring Multi-Order Bilateral Relations for Audio-… ☆40 · Updated 9 months ago
- [ECCV’24] Official Implementation for CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… ☆58 · Updated last year
- Codebase for the paper: "TIM: A Time Interval Machine for Audio-Visual Action Recognition" ☆52 · Updated last year
- ☆14 · Updated 2 years ago
- Question-Aware Gaussian Experts for Audio-Visual Question Answering -- Official PyTorch Implementation (CVPR'25, Highlight) ☆26 · Updated 8 months ago
- Official repository of "Prompting Segmentation with Sound is Generalizable Audio-Visual Source Localizer", AAAI 2024 ☆25 · Updated last year
- Research code for the NeurIPS 2023 paper "Modality-Independent Teachers Meet Weakly-Supervised Audio-Visual Event Parser" ☆17 · Updated 6 months ago
- Unified Audio-Visual Perception for Multi-Task Video Localization ☆30 · Updated last year
- [CVPR 2025] Crab: A Unified Audio-Visual Scene Understanding Model with Explicit Cooperation ☆80 · Updated last month
- LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos (CVPR 2025) ☆56 · Updated 7 months ago
- Official implementation of "Open-Vocabulary Audio-Visual Semantic Segmentation" [ACM MM 2024 Oral] ☆35 · Updated last year
- [CVPR 2024] Do You Remember? Dense Video Captioning with Cross-Modal Memory Retrieval ☆64 · Updated last year
- ☆13 · Updated last year
- ☆27 · Updated 6 months ago
- Vision Transformers are Parameter-Efficient Audio-Visual Learners ☆106 · Updated 2 years ago
- ☆13 · Updated last year
- MUSIC-AVQA, CVPR 2022 (Oral) ☆94 · Updated 3 years ago
- [AAAI 2024] AVSegFormer: Audio-Visual Segmentation with Transformer ☆73 · Updated 11 months ago
- AVQA: A Dataset for Audio-Visual Question Answering on Videos (ACM MM 2022) ☆15 · Updated 2 years ago
- Official codebase for "Unveiling the Power of Audio-Visual Early Fusion Transformers with Dense Interactions through Masked Modeling" ☆39 · Updated last year
- Official PyTorch repository for "Knowing Where to Focus: Event-aware Transformer for Video Grounding" (ICCV 2023) ☆55 · Updated 2 years ago
- ☆18 · Updated 6 months ago
- Towards Long Form Audio-visual Video Understanding ☆14 · Updated 3 weeks ago
- NeurIPS 2023 official implementation code ☆68 · Updated 2 years ago
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆65 · Updated last year