Paranioar / Awesome_Matching_Pretraining_Transfering
A paper list covering large multi-modality models (perception, generation, and unification), parameter-efficient finetuning, vision-language pretraining, and conventional image-text matching, intended for preliminary insight.
☆433 · Updated last month
Alternatives and similar repositories for Awesome_Matching_Pretraining_Transfering
Users interested in Awesome_Matching_Pretraining_Transfering are comparing it to the libraries listed below.
- Summary of Related Research on Image-Text Matching ☆71 · Updated 2 years ago
- [AAAI2021] The code of “Similarity Reasoning and Filtration for Image-Text Matching” ☆219 · Updated last year
- code for TCL: Vision-Language Pre-Training with Triple Contrastive Learning, CVPR 2022 ☆267 · Updated last year
- Code for "Learning the Best Pooling Strategy for Visual Semantic Embedding", CVPR 2021 (Oral) ☆162 · Updated 2 months ago
- Cross-Modal-Real-valuded-Retrieval ☆85 · Updated 2 years ago
- Dynamic Modality Interaction Modeling for Image-Text Retrieval. SIGIR'21 ☆71 · Updated 3 years ago
- Continuously updated list of cutting-edge papers on video moment localization / temporal sentence grounding / video clip retrieval. ☆257 · Updated 2 years ago
- Implementation of our CVPR2022 paper, Negative-Aware Attention Framework for Image-Text Matching. ☆120 · Updated 2 years ago
- METER: A Multimodal End-to-end TransformER Framework ☆373 · Updated 3 years ago
- Implementation of our CVPR2020 paper, Graph Structured Network for Image-Text Matching ☆169 · Updated 5 years ago
- Implementation of our AAAI2022 paper, Show Your Faith: Cross-Modal Confidence-Aware Network for Image-Text Matching. ☆36 · Updated 2 years ago
- A PyTorch reimplementation of bottom-up-attention models ☆304 · Updated 3 years ago
- This repository focuses on Image Captioning & Video Captioning & Seq-to-Seq Learning & NLP ☆414 · Updated 3 years ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆296 · Updated 2 years ago
- [arXiv22] Disentangled Representation Learning for Text-Video Retrieval ☆97 · Updated 3 years ago
- Source codes of the paper "When CLIP meets Cross-modal Hashing Retrieval: A New Strong Baseline" ☆32 · Updated 4 months ago
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆485 · Updated 2 years ago
- https://layer6ai-labs.github.io/xpool/ ☆129 · Updated 2 years ago
- A curated list of Multimodal Captioning related research (including image captioning, video captioning, and text captioning) ☆112 · Updated 3 years ago
- PyTorch code for ICCV'19 paper "Visual Semantic Reasoning for Image-Text Matching" ☆302 · Updated 5 years ago
- A curated list of deep learning resources for video-text retrieval. ☆635 · Updated 2 years ago
- An official implementation for "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation" ☆362 · Updated last year
- Code accompanying the paper "Fine-grained Video-Text Retrieval with Hierarchical Graph Reasoning". ☆212 · Updated 5 years ago
- ☆76 · Updated 2 years ago
- Deep learning cross modal hashing in PyTorch ☆108 · Updated 4 years ago
- Deep Multimodal Neural Architecture Search ☆29 · Updated 5 years ago
- A Survey on multimodal learning research. ☆333 · Updated 2 years ago
- code for our CVPR2020 paper "IMRAM: Iterative Matching with Recurrent Attention Memory for Cross-Modal Image-Text Retrieval" ☆96 · Updated 5 years ago
- CAMP: Cross-Modal Adaptive Message Passing for Text-Image Retrieval ☆127 · Updated 5 years ago
- A py3 lib for NLP & image-caption metrics: BLEU, METEOR, CIDEr, ROUGE, SPICE, WMD ☆14 · Updated 3 years ago