Sejong-VLI / V2T-Action-Graph-JKSUCIS-2023
Implementation of the paper "Action Knowledge for Video Captioning with Graph Neural Networks" (JKSUCIS 2023).
☆13 · Updated 2 years ago
Alternatives and similar repositories for V2T-Action-Graph-JKSUCIS-2023: users interested in V2T-Action-Graph-JKSUCIS-2023 are comparing it to the libraries listed below.
- [CVPR2022] Official code for Hierarchical Modular Network for Video Captioning. Our proposed HMN is implemented with PyTorch. ☆52 · Updated 2 years ago
- A Video-to-Text Framework ☆10 · Updated last year
- Source code of our TCSVT'22 paper Reading-strategy Inspired Visual Representation Learning for Text-to-Video Retrieval ☆19 · Updated 3 years ago
- Related work on Temporal Sentence Grounding in Videos / Natural Language Video Localization / Video Moment Retrieval ☆28 · Updated 3 years ago
- The first unofficial implementation of CLIP4Caption: CLIP for Video Caption (ACMMM 2021) ☆14 · Updated 2 years ago
- The code of the IJCAI22 paper "GL-RG: Global-Local Representation Granularity for Video Captioning" ☆18 · Updated last year
- Source code of our MGPN in SIGIR 2022 ☆18 · Updated 2 years ago
- (TIP'2023) Concept-Aware Video Captioning: Describing Videos with Effective Prior Information ☆27 · Updated 3 months ago
- [ICCV 2023 CLVL Workshop] Zero-Shot and Few-Shot Video Question Answering with Multi-Modal Prompts ☆12 · Updated 3 months ago
- Official PyTorch implementation of "Explore-And-Match: Bridging Proposal-Based and Proposal-Free With Transformer for Sentence Grounding …" ☆42 · Updated 2 years ago
- [arXiv22] Disentangled Representation Learning for Text-Video Retrieval ☆95 · Updated 3 years ago
- ☆13 · Updated 3 years ago
- Dual Learning with Dynamic Knowledge Distillation for Partially Relevant Video Retrieval ☆16 · Updated 3 months ago
- Official PyTorch repository for "Knowing Where to Focus: Event-aware Transformer for Video Grounding" (ICCV 2023) ☆50 · Updated last year
- [ICCV 2023] Accurate and Fast Compressed Video Captioning ☆41 · Updated last year
- Scanning Only Once: An End-to-end Framework for Fast Temporal Grounding in Long Videos ☆23 · Updated 9 months ago
- ☆7 · Updated last year
- [ICCV 2023] The official PyTorch implementation of the paper "Localizing Moments in Long Video Via Multimodal Guidance" ☆19 · Updated 6 months ago
- Official PyTorch implementation of the AAAI 2021 paper "Semantic Grouping Network for Video Captioning" ☆51 · Updated 3 years ago
- Repository of proposal-free temporal moment localization work ☆33 · Updated 10 months ago
- CPL: Weakly Supervised Temporal Sentence Grounding with Gaussian-based Contrastive Proposal Learning ☆62 · Updated last year
- PyTorch code for "Unified Coarse-to-Fine Alignment for Video-Text Retrieval" (ICCV 2023) ☆63 · Updated 10 months ago
- https://layer6ai-labs.github.io/xpool/ ☆122 · Updated last year
- Entity-Aware and Motion-Aware Transformers for Language-driven Action Localization (IJCAI-22) ☆12 · Updated 2 years ago
- "Video Moment Retrieval from Text Queries via Single Frame Annotation" in SIGIR 2022 ☆69 · Updated 2 years ago
- ☆33 · Updated last year
- ☆14 · Updated last year
- Video Graph Transformer for Video Question Answering (ECCV'22) ☆47 · Updated last year
- Weakly Supervised Video Moment Localisation with Contrastive Negative Sample Mining ☆26 · Updated 3 years ago
- Reproduction of "Weakly Supervised Coupled Networks for Visual Sentiment Analysis" ☆14 · Updated 5 years ago