sjy0727 / CLIP-Text-Image-Retrieval
This project aims to retrieve images that match an input text description (text-to-image retrieval).
☆26 · Updated last year
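Conceptually, CLIP-style text-to-image retrieval encodes the query text and the candidate images into a shared embedding space and ranks the images by cosine similarity to the query. Below is a minimal sketch of that idea using the Hugging Face `transformers` CLIP API; the model checkpoint, image paths, and query string are illustrative assumptions, not this repository's actual code.

```python
# Minimal text-to-image retrieval sketch with a pretrained CLIP model.
# NOTE: the model name, image paths, and query below are illustrative
# assumptions, not taken from the CLIP-Text-Image-Retrieval repository.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

image_paths = ["cat.jpg", "dog.jpg", "car.jpg"]  # hypothetical candidate gallery
images = [Image.open(p).convert("RGB") for p in image_paths]
query = "a photo of a cat sleeping on a sofa"

with torch.no_grad():
    # Encode images and text into the shared CLIP embedding space.
    image_emb = model.get_image_features(**processor(images=images, return_tensors="pt"))
    text_emb = model.get_text_features(**processor(text=[query], return_tensors="pt", padding=True))

# L2-normalize, then rank the gallery by cosine similarity to the query.
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
scores = (text_emb @ image_emb.T).squeeze(0)
for idx in scores.argsort(descending=True):
    print(f"{image_paths[int(idx)]}: {scores[idx].item():.3f}")
```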
Related projects
Alternatives and complementary repositories for CLIP-Text-Image-Retrieval
- USER: Unified Semantic Enhancement with Momentum Contrast for Image-Text Retrieval, TIP 2024 ☆21 · Updated 8 months ago
- Efficient Token-Guided Image-Text Retrieval with Consistent Multimodal Contrastive Training ☆23 · Updated last year
- Implementation of our paper, 'Unifying Two-Stream Encoders with Transformers for Cross-Modal Retrieval.' ☆20 · Updated 11 months ago
- Summary of Related Research on Image-Text Matching ☆67 · Updated last year
- Computer vision course project: an image-text retrieval system based on Chinese-CLIP ☆47 · Updated last year
- [TIP 2023] The code of “Plug-and-Play Regulators for Image-Text Matching” ☆29 · Updated 7 months ago
- Implementation of our CVPR 2022 paper, Negative-Aware Attention Framework for Image-Text Matching. ☆111 · Updated last year
- Official Code for the ICCV23 Paper: "LexLIP: Lexicon-Bottlenecked Language-Image Pre-Training for Large-Scale Image-Text Sparse Retrieval… ☆41 · Updated last year
- Source code of our AAAI 2024 paper "Cross-Modal and Uni-Modal Soft-Label Alignment for Image-Text Retrieval" ☆26 · Updated 7 months ago
- Implementation of our AAAI 2022 paper, Show Your Faith: Cross-Modal Confidence-Aware Network for Image-Text Matching. ☆36 · Updated last year
- [ICLR 2023] This is the code repo for our ICLR '23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆48 · Updated 4 months ago
- [CVPR 2023] VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval ☆38 · Updated last year
- This project summarizes CLIP-based cross-modal hashing methods, including DCMHT, MITH, DSPH, DNPH, TwDH (Two-Step Discrete Hashing fo… ☆18 · Updated 7 months ago
- Paper reading notes in the field of Image-Text Matching/Retrieval. ☆14 · Updated 2 years ago
- Internet image-text matching based on multimodal retrieval ☆10 · Updated 8 months ago
- The code of "Image-text Retrieval via Preserving Main Semantic of Vision" in ICME 2023. ☆13 · Updated 11 months ago
- Source codes of the paper "When CLIP meets Cross-modal Hashing Retrieval: A New Strong Baseline" ☆24 · Updated 8 months ago
- Benchmark data for "Rethinking Benchmarks for Cross-modal Image-text Retrieval" (SIGIR 2023) ☆24 · Updated last year
- Cross-Modal-Real-valuded-Retrieval ☆76 · Updated last year
- Official implementation of "Open-Vocabulary Multi-Label Classification via Multi-Modal Knowledge Transfer". ☆119 · Updated 2 weeks ago
- [Paper][AAAI 2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations ☆114 · Updated 5 months ago
- An image captioning model based on ClipCap ☆285 · Updated 2 years ago
- [SIGIR 2024] Simple but Effective Raw-Data Level Multimodal Fusion for Composed Image Retrieval ☆24 · Updated 4 months ago
- Chinese CLIP: supports custom datasets; extracts embeddings from text and images to perform text-image matching. ☆21 · Updated 2 years ago