VideoAnalysis / EDUVSUM
EDUVSUM is a multimodal neural architecture that utilizes state-of-the-art audio, visual and textual features to identify important temporal segments in educational videos.
☆22 · Updated last year
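To make the description above concrete, here is a minimal, hypothetical PyTorch sketch of the general idea: per-segment audio, visual, and text features are projected into a shared space, fused, given temporal context, and scored for importance. This is not EDUVSUM's actual code; the module names, feature dimensions, additive fusion, and GRU choice are all illustrative assumptions.

```python
# Minimal sketch (NOT the official EDUVSUM implementation): fusing per-segment
# audio, visual, and text features to predict one importance score per temporal
# segment. All dimensions and design choices below are illustrative assumptions.
import torch
import torch.nn as nn

class SegmentImportanceScorer(nn.Module):
    def __init__(self, audio_dim=128, visual_dim=2048, text_dim=768, hidden_dim=256):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        # Bidirectional GRU adds temporal context over the fused segment sequence.
        self.temporal = nn.GRU(hidden_dim, hidden_dim, batch_first=True, bidirectional=True)
        # One importance score per segment.
        self.scorer = nn.Linear(2 * hidden_dim, 1)

    def forward(self, audio, visual, text):
        # audio:  (batch, num_segments, audio_dim)
        # visual: (batch, num_segments, visual_dim)
        # text:   (batch, num_segments, text_dim)
        fused = self.audio_proj(audio) + self.visual_proj(visual) + self.text_proj(text)
        context, _ = self.temporal(fused)
        # Sigmoid maps each segment's score to [0, 1].
        return torch.sigmoid(self.scorer(context)).squeeze(-1)  # (batch, num_segments)

# Usage with random features for a single 20-segment video:
model = SegmentImportanceScorer()
scores = model(torch.randn(1, 20, 128), torch.randn(1, 20, 2048), torch.randn(1, 20, 768))
```

Selecting the highest-scoring segments up to a length budget would then yield the summary.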
Alternatives and similar repositories for EDUVSUM
Users interested in EDUVSUM are comparing it to the libraries listed below.
- PyTorch implementation of Multi-modal Dense Video Captioning (CVPR 2020 Workshops) ☆144 · Updated 2 years ago
- ☆251 · Updated 2 years ago
- Code and dataset of "MEmoR: A Dataset for Multimodal Emotion Reasoning in Videos" (MM'20) ☆53 · Updated 2 years ago
- Using VideoBERT to tackle video prediction ☆130 · Updated 4 years ago
- ☆16 · Updated 4 years ago
- MUSIC-AVQA, CVPR 2022 (Oral) ☆88 · Updated 2 years ago
- PyTorch implementation of HANet: Hierarchical Alignment Networks for Video-Text Retrieval (ACM MM 2021) ☆47 · Updated 4 years ago
- Source code of our TPAMI'21 paper Dual Encoding for Video Retrieval by Text and CVPR'19 paper Dual Encoding for Zero-Example Video Retrie… ☆88 · Updated 2 years ago
- [arXiv22] Disentangled Representation Learning for Text-Video Retrieval ☆96 · Updated 3 years ago
- Source code for "Bi-modal Transformer for Dense Video Captioning" (BMVC 2020) ☆228 · Updated 2 years ago
- Learning Interactions and Relationships between Movie Characters (CVPR'20) ☆21 · Updated 2 years ago
- Multimodal short-video classification, integrating video, image, audio, and text modalities ☆19 · Updated 5 years ago
- Official implementation of AdaMML. https://arxiv.org/abs/2105.05165 ☆51 · Updated 3 years ago
- Code and benchmarks for the Semantic Video Retrieval Task ☆53 · Updated 2 years ago
- Multi-Modal Transformer for Video Retrieval ☆260 · Updated 10 months ago
- Video Feature Extractor for S3D-HowTo100M ☆29 · Updated 4 years ago
- UMT is a unified and flexible framework which can handle different input modality combinations, and output video moment retrieval and/or … ☆225 · Updated last year
- Source code of our MM'22 paper Partially Relevant Video Retrieval ☆54 · Updated 9 months ago
- [ECCV 2020] PyTorch code of MMT (a multimodal transformer captioning model) on TVCaption dataset ☆90 · Updated last year
- An official implementation for "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation" ☆360 · Updated last year
- PyTorch implementation of the paper "Distilling Audio-Visual Knowledge by Compositional Contrastive Learning" (CVPR 2021) ☆89 · Updated 4 years ago
- Code for the paper "TL;DW? Summarizing Instructional Videos with Task Relevance & Cross-Modal Saliency" (ECCV 2022) ☆39 · Updated 2 years ago
- [CVPR 2023] VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval ☆38 · Updated 2 years ago
- Code for selecting an action based on multimodal inputs; in this case, the inputs are voice and text ☆73 · Updated 4 years ago
- ☆16 · Updated last year
- Easy-to-use video deep features extractor ☆320 · Updated 5 years ago
- Narrative movie understanding benchmark ☆77 · Updated 2 months ago
- [TPAMI 2024] Codes and Models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset ☆298 · Updated 8 months ago
- (TIP'2023) Concept-Aware Video Captioning: Describing Videos with Effective Prior Information ☆29 · Updated 8 months ago
- Implementation of the Benchmark Approaches for Medical Instructional Video Classification (MedVidCL) and Medical Video Question Answering… ☆28 · Updated 2 years ago