Shreyz-max / Video-Captioning
Video Captioning is an encoder-decoder model based on sequence-to-sequence learning.
☆138 · Updated last year
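The description above covers the model only at a high level. For orientation, here is a minimal sketch of how such an encoder-decoder (sequence-to-sequence) video captioner is commonly wired up in Keras; all layer sizes, frame counts, and names below are illustrative assumptions, not the repository's actual code.

```python
# Minimal sketch of an encoder-decoder (seq2seq) video-captioning model in Keras.
# Shapes and names are assumptions for illustration: the encoder reads a fixed
# number of per-frame CNN features, and an LSTM decoder generates the caption
# token by token (teacher forcing at training time).
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_FRAMES = 80        # frames sampled per video (assumed)
FEATURE_DIM = 4096     # per-frame CNN feature size, e.g. VGG16 fc layer (assumed)
VOCAB_SIZE = 1500      # caption vocabulary size (assumed)
MAX_CAPTION_LEN = 10   # decoder input length (assumed)
LATENT_DIM = 512       # LSTM hidden size (assumed)

# Encoder: run an LSTM over the sequence of frame features and keep its final state.
encoder_inputs = layers.Input(shape=(NUM_FRAMES, FEATURE_DIM), name="video_features")
_, state_h, state_c = layers.LSTM(LATENT_DIM, return_state=True)(encoder_inputs)
encoder_states = [state_h, state_c]

# Decoder: an LSTM initialised with the encoder state predicts the next word.
decoder_inputs = layers.Input(shape=(MAX_CAPTION_LEN,), name="caption_tokens")
embedded = layers.Embedding(VOCAB_SIZE, LATENT_DIM, mask_zero=True)(decoder_inputs)
decoder_outputs = layers.LSTM(LATENT_DIM, return_sequences=True)(
    embedded, initial_state=encoder_states
)
logits = layers.Dense(VOCAB_SIZE, activation="softmax")(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], logits)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

At inference time the decoder would be run step by step, feeding back its own predictions (optionally with beam search) instead of ground-truth tokens.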
Alternatives and similar repositories for Video-Captioning
Users interested in Video-Captioning are comparing it to the repositories listed below.
- Source code for "Bi-modal Transformer for Dense Video Captioning" (BMVC 2020) ☆230 · Updated 2 years ago
- COOT: Cooperative Hierarchical Transformer for Video-Text Representation Learning ☆291 · Updated 3 years ago
- Video to Text: Natural language description generator for some given video. [Video Captioning] ☆360 · Updated 3 years ago
- Image Captioning Using Transformer ☆271 · Updated 3 years ago
- Video Grounding and Captioning ☆332 · Updated 4 years ago
- PyTorch implementation of Multi-modal Dense Video Captioning (CVPR 2020 Workshops) ☆144 · Updated 2 years ago
- Implemented 3 different architectures to tackle the Image Captioning problem, i.e., Merged Encoder-Decoder - Bahdanau Attention - Transformer… ☆40 · Updated 4 years ago
- A repository for extracting CNN features from videos using PyTorch ☆70 · Updated 3 years ago
- Using LSTM or Transformer to solve Image Captioning in Pytorch ☆79 · Updated 4 years ago
- A PyTorch Implementation of PGL-SUM from "Combining Global and Local Attention with Positional Encoding for Video Summarization" (IEEE IS… ☆91 · Updated 2 years ago
- DSNet: A Flexible Detect-to-Summarize Network for Video Summarization ☆220 · Updated 4 years ago
- Easy to use video deep features extractor ☆322 · Updated 5 years ago
- Image Captioning using CNN and Transformer. ☆55 · Updated 4 years ago
- Pytorch implementation of image captioning using transformer-based model. ☆68 · Updated 2 years ago
- PyTorch implementation of the ACCV 2018-AIU2018 paper Video Summarization with Attention ☆185 · Updated 3 years ago
- A Keras Implementation of Supervised Video Summarization using Attention Based Encoder-Decoder Networks ☆30 · Updated 3 years ago
- Deep learning model for supervised video summarization called Multi Source Visual Attention (MSVA) ☆47 · Updated last year
- A neural network to generate captions for an image using CNN and RNN with BEAM Search. ☆309 · Updated 5 years ago
- Unsupervised video summarization with deep reinforcement learning (AAAI'18) ☆503 · Updated 2 years ago
- Meshed-Memory Transformer for Image Captioning. CVPR 2020 ☆541 · Updated 3 years ago
- End-to-End Dense Video Captioning with Parallel Decoding (ICCV 2021) ☆227 · Updated 2 years ago
- Video embeddings for retrieval with natural language queries ☆342 · Updated 2 years ago
- Pytorch VQA: Visual Question Answering (https://arxiv.org/pdf/1505.00468.pdf) ☆98 · Updated 2 years ago
- Using VideoBERT to tackle video prediction ☆133 · Updated 4 years ago
- This repository contains the code for a video captioning system inspired by Sequence to Sequence -- Video to Text. This system takes as i… ☆172 · Updated 6 years ago
- Pytorch implementation of VQA: Visual Question Answering (https://arxiv.org/pdf/1505.00468.pdf) using VQA v2.0 dataset for open-ended ta… ☆21 · Updated 5 years ago
- Source code for the paper "Unsupervised Video Summarization via Multi-source Features" published at ICMR 2021 ☆21 · Updated 3 years ago
- Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval [ICCV'21] ☆379 · Updated 3 years ago
- Research code for CVPR 2022 paper "SwinBERT: End-to-End Transformers with Sparse Attention for Video Captioning" ☆247 · Updated 3 years ago
- Extract video features from raw videos using multiple GPUs. We support RAFT flow frames as well as S3D, I3D, R(2+1)D, VGGish, CLIP, and T… ☆640 · Updated 11 months ago