sjy0727 / CLIP-Text-Image-Retrieval
This project retrieves images that match an input text description.
☆40 Updated last year
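At its core, this kind of retrieval embeds the query text and the candidate images with CLIP and ranks the images by similarity to the text. Below is a minimal sketch using the Hugging Face `transformers` CLIP API with the `openai/clip-vit-base-patch32` checkpoint; the checkpoint name, the `rank_images` helper, and the brute-force scoring are illustrative assumptions, not this repository's actual implementation (which may use a Chinese CLIP variant or a prebuilt vector index).

```python
# Minimal sketch of CLIP-based text-to-image retrieval (assumed setup,
# not the repo's actual pipeline).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rank_images(query: str, image_paths: list[str], top_k: int = 5):
    """Return (path, score) pairs ranked by text-image similarity."""
    images = [Image.open(p).convert("RGB") for p in image_paths]
    inputs = processor(text=[query], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_text has shape (1, num_images): similarity of the query to each image
    scores = out.logits_per_text.squeeze(0)
    order = scores.argsort(descending=True)[:top_k]
    return [(image_paths[i], scores[i].item()) for i in order]
```

For larger galleries, image embeddings are typically precomputed once and stored in a vector index (e.g. FAISS), so that only the text query needs to be encoded at search time.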
Alternatives and similar repositories for CLIP-Text-Image-Retrieval:
Users interested in CLIP-Text-Image-Retrieval are comparing it to the repositories listed below.
- Computer vision course project: an image-text retrieval system based on Chinese-CLIP ☆63 Updated last year
- ☆47 Updated last year
- Efficient Token-Guided Image-Text Retrieval with Consistent Multimodal Contrastive Training ☆27 Updated last year
- Summary of Related Research on Image-Text Matching ☆70 Updated last year
- USER: Unified Semantic Enhancement with Momentum Contrast for Image-Text Retrieval, TIP 2024 ☆31 Updated last year
- Implementation of our paper, 'Unifying Two-Stream Encoders with Transformers for Cross-Modal Retrieval.' ☆24 Updated last year
- ☆13 Updated last year
- Graduation project: Design and Implementation of Video-Text Retrieval Based on the CLIP Model ☆11 Updated 9 months ago
- Implementation of our CVPR2022 paper, Negative-Aware Attention Framework for Image-Text Matching. ☆115 Updated last year
- Source code of our AAAI 2024 paper "Cross-Modal and Uni-Modal Soft-Label Alignment for Image-Text Retrieval" ☆41 Updated last year
- Official implementation of "Open-Vocabulary Multi-Label Classification via Multi-Modal Knowledge Transfer". ☆126 Updated 6 months ago
- Internet image-text matching based on multimodal retrieval ☆14 Updated last year
- Research Code for Multimodal-Cognition Team in Ant Group ☆143 Updated 9 months ago
- [ICLR 2023] This is the code repo for our ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆50 Updated 10 months ago
- Cross-Modal-Real-valuded-Retrieval ☆81 Updated last year
- Official Code for the ICCV23 Paper: "LexLIP: Lexicon-Bottlenecked Language-Image Pre-Training for Large-Scale Image-Text Sparse Retrieval… ☆41 Updated last year
- ☆27 Updated last year
- Implementation of our AAAI2022 paper, Show Your Faith: Cross-Modal Confidence-Aware Network for Image-Text Matching. ☆36 Updated last year
- GroundVLP: Harnessing Zero-shot Visual Grounding from Vision-Language Pre-training and Open-Vocabulary Object Detection (AAAI 2024) ☆66 Updated last year
- The code of "Image-text Retrieval via Preserving Main Semantic of Vision" in ICME 2023. ☆14 Updated last year
- [TIP2023] The code of "Plug-and-Play Regulators for Image-Text Matching" ☆33 Updated last year
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆86 Updated last year
- Source codes of the paper "When CLIP meets Cross-modal Hashing Retrieval: A New Strong Baseline" ☆29 Updated last year
- Noise of Web (NoW) is a challenging noisy correspondence learning (NCL) benchmark containing 100K image-text pairs for robust image-text … ☆12 Updated 5 months ago
- [SIGIR 2024] Simple but Effective Raw-Data Level Multimodal Fusion for Composed Image Retrieval ☆35 Updated 9 months ago
- Multimodal-Composite-Editing-and-Retrieval-update ☆32 Updated 6 months ago
- The official implementation for BLIP4CIR with bi-directional training | Bi-directional Training for Composed Image Retrieval via Text Pro… ☆30 Updated last year
- This repo is for the implementation of Enhancing Image-Text Matching with Adaptive Feature Aggregation, ICASSP 2024 ☆9 Updated 10 months ago
- [Paper][AAAI2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations ☆137 Updated 10 months ago
- AMC: Adaptive Multi-expert Collaborative Network for Text-guided Image Retrieval ☆19 Updated 8 months ago