jchenghu / ExpansionNet_v2
Implementation code of the work "Exploiting Multiple Sequence Lengths in Fast End to End Training for Image Captioning"
☆87 · Updated 2 months ago
Alternatives and similar repositories for ExpansionNet_v2:
Users interested in ExpansionNet_v2 are comparing it to the repositories listed below.
- GRIT: Faster and Better Image-captioning Transformer (ECCV 2022) ☆189 · Updated last year
- CapDec: SOTA Zero Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (findings) ☆191 · Updated last year
- CaMEL: Mean Teacher Learning for Image Captioning. ICPR 2022 ☆29 · Updated 2 years ago
- Pytorch implementation of image captioning using a transformer-based model. ☆65 · Updated last year
- Official PyTorch implementation of our CVPR 2022 paper: Beyond a Pre-Trained Object Detector: Cross-Modal Textual and Visual Context for … ☆61 · Updated 2 years ago
- Official Code for 'RSTNet: Captioning with Adaptive Attention on Visual and Non-Visual Words' (CVPR 2021) ☆122 · Updated 2 years ago
- SmallCap: Lightweight Image Captioning Prompted with Retrieval Augmentation ☆100 · Updated last year
- Using LSTM or Transformer to solve Image Captioning in Pytorch ☆76 · Updated 3 years ago
- An official implementation for "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆152 · Updated 11 months ago
- Implementation of 'End-to-End Transformer Based Model for Image Captioning' [AAAI 2022] ☆67 · Updated 9 months ago
- [ICCV 2023] Accurate and Fast Compressed Video Captioning ☆39 · Updated last year
- ☆84 · Updated 2 years ago
- ICLR 2023 DeCap: Decoding CLIP Latents for Zero-shot Captioning ☆130 · Updated 2 years ago
- Official implementation of "ConZIC: Controllable Zero-shot Image Captioning by Sampling-Based Polishing" ☆73 · Updated last year
- [CVPR23] A cascaded diffusion captioning model with a novel semantic-conditional diffusion process that upgrades conventional diffusion m… ☆60 · Updated 9 months ago
- (ACL'2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ☆35 · Updated 7 months ago
- [ICLR 2022] code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆411 · Updated 2 years ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆213 · Updated 2 years ago
- Image Captioning Using Transformer ☆262 · Updated 2 years ago
- [AAAI 2023 Oral] VLTinT: Visual-Linguistic Transformer-in-Transformer for Coherent Video Paragraph Captioning ☆66 · Updated last year
- Implementation of the paper CPTR: Full Transformer Network for Image Captioning ☆30 · Updated 2 years ago
- Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation (CVPR 2023) ☆60 · Updated 2 weeks ago
- A paper list of image captioning. ☆22 · Updated 2 years ago
- A curated list of Multimodal Captioning-related research (including image captioning, video captioning, and text captioning) ☆110 · Updated 2 years ago
- Pytorch Code for "Unified Coarse-to-Fine Alignment for Video-Text Retrieval" (ICCV 2023) ☆63 · Updated 9 months ago
- Research code for CVPR 2022 paper "SwinBERT: End-to-End Transformers with Sparse Attention for Video Captioning" ☆239 · Updated 2 years ago
- https://layer6ai-labs.github.io/xpool/ ☆120 · Updated last year
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆272 · Updated 2 years ago
- [NeurIPS 2021] Moment-DETR code and QVHighlights dataset ☆292 · Updated 11 months ago
- Towards Local Visual Modeling for Image Captioning ☆27 · Updated last year