gimpong / WWW22-HCQ
The code for the paper "Hybrid Contrastive Quantization for Efficient Cross-View Video Retrieval" (WWW'22, Oral).
☆17 · Updated 3 years ago
Alternatives and similar repositories for WWW22-HCQ
Users interested in WWW22-HCQ are comparing it to the repositories listed below.
- A Comprehensive Empirical Study of Vision-Language Pre-trained Model for Supervised Cross-Modal Retrieval ☆43 · Updated 3 years ago
- Learning Cross-Modal Retrieval with Noisy Labels (CVPR 2021, PyTorch code) ☆55 · Updated 2 years ago
- The code for the paper "Contrastive Quantization with Code Memory for Unsupervised Image Retrieval" (AAAI'22, Oral). ☆38 · Updated 3 years ago
- [AAAI 2023] Contrastive Masked Autoencoders for Self-Supervised Video Hashing ☆27 · Updated 2 years ago
- ☆77 · Updated 2 years ago
- Cross Modal Retrieval with Querybank Normalisation ☆56 · Updated last year
- Official PyTorch implementation of "Probabilistic Cross-Modal Embedding" (CVPR 2021) ☆134 · Updated last year
- ☆46 · Updated 3 years ago
- Official implementation of the Composed Image Retrieval using Pretrained LANguage Transformers (CIRPLANT) | ICCV 2021 - Image Retrieval o… ☆40 · Updated last year
- Official implementation of CoSMo: Content-Style Modulation for Image Retrieval with Text Feedback, presented at CVPR 2021. ☆66 · Updated 3 years ago
- The source code for the CVPR 2020 paper "Creating Something from Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing". ☆24 · Updated 5 years ago
- Dynamic Modality Interaction Modeling for Image-Text Retrieval (SIGIR'21) ☆71 · Updated 3 years ago
- Deep Evidential Learning with Noisy Correspondence for Cross-modal Retrieval (ACM Multimedia 2022, PyTorch code) ☆47 · Updated last year
- [CVPR 2023] VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval ☆38 · Updated 2 years ago
- Code for "Learning the Best Pooling Strategy for Visual Semantic Embedding", CVPR 2021 (Oral) ☆162 · Updated 2 months ago
- Adaptive Offline Quintuplet Loss for Image-Text Matching (AOQ) ☆34 · Updated 5 years ago
- Implementation of our ACM MM 2019 paper "Focus Your Attention: A Bidirectional Focal Attention Network for Image-Text Matching" ☆39 · Updated 2 years ago
- Official repository of ICCV 2021 - Image Retrieval on Real-life Images with Pre-trained Vision-and-Language Models ☆124 · Updated last month
- Source code of "Universal Weighting Metric Learning for Cross-Modal Matching", accepted at CVPR 2020. ☆22 · Updated 3 years ago
- Code for the journal paper "Learning Dual Semantic Relations with Graph Attention for Image-Text Matching" (TCSVT, 2020) ☆73 · Updated 3 years ago
- Source code of our MM'22 paper "Partially Relevant Video Retrieval" ☆54 · Updated last year
- Deep Graph-neighbor Coherence Preserving Network for Unsupervised Cross-modal Hashing ☆36 · Updated 4 years ago
- The source code of "Bit-aware Semantic Transformer Hashing for Multi-modal Retrieval" (accepted by SIGIR 2022) ☆17 · Updated 3 years ago
- ☆23 · Updated 3 years ago
- PyTorch implementation of the CVPR 2021 paper "Distilling Audio-Visual Knowledge by Compositional Contrastive Learning" ☆89 · Updated 4 years ago
- Implementation of our CVPR 2022 paper "Negative-Aware Attention Framework for Image-Text Matching" ☆120 · Updated 2 years ago
- Official implementation of our EMNLP 2022 paper "CPL: Counterfactual Prompt Learning for Vision and Language Models" ☆34 · Updated 2 years ago
- Vision-Language Pretraining & Efficient Transformer Papers ☆15 · Updated 3 years ago
- MixGen: A New Multi-Modal Data Augmentation ☆126 · Updated 2 years ago
- Code for "TCL: Vision-Language Pre-Training with Triple Contrastive Learning" (CVPR 2022) ☆267 · Updated last year