zjukg / Structure-CLIP
[Paper] [AAAI 2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations
☆148 · Updated last year
Alternatives and similar repositories for Structure-CLIP
Users interested in Structure-CLIP are comparing it to the repositories listed below.
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆55 · Updated last year
- SmallCap: Lightweight Image Captioning Prompted with Retrieval Augmentation ☆122 · Updated last year
- (CVPR 2024) MeaCap: Memory-Augmented Zero-shot Image Captioning ☆50 · Updated last year
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆74 · Updated 8 months ago
- [ICLR 2024, Spotlight] Sentence-level Prompts Benefit Composed Image Retrieval ☆87 · Updated last year
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆88 · Updated last year
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models ☆46 · Updated last year
- A comprehensive survey of Composed Multi-modal Retrieval (CMR), including Composed Image Retrieval (CIR) and Composed Video Retrieval (CV… ☆56 · Updated last month
- MMICL, a state-of-the-art VLM with the in-context learning ability from ICL, PKU ☆50 · Updated 2 months ago
- [SIGIR 2024] Simple but Effective Raw-Data Level Multimodal Fusion for Composed Image Retrieval ☆43 · Updated last year
- [CVPR 2024] Official Code for the Paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models" ☆135 · Updated last year
- Source code of our AAAI 2024 paper "Cross-Modal and Uni-Modal Soft-Label Alignment for Image-Text Retrieval" ☆48 · Updated last year
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆165 · Updated last year
- [CVPR 2025 Highlight] Official PyTorch codebase for the paper "Assessing and Learning Alignment of Unimodal Vision and Language Models" ☆49 · Updated last month
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆84 · Updated last year
- [BMVC 2023] Zero-shot Composed Text-Image Retrieval ☆54 · Updated 10 months ago
- Official implementation of "Open-Vocabulary Multi-Label Classification via Multi-Modal Knowledge Transfer" ☆129 · Updated 10 months ago
- [TMM 2023] Self-paced Curriculum Adapting of CLIP for Visual Grounding ☆131 · Updated last month
- GroundVLP: Harnessing Zero-shot Visual Grounding from Vision-Language Pre-training and Open-Vocabulary Object Detection (AAAI 2024) ☆71 · Updated last year
- Official Code for the ICCV 2023 Paper "LexLIP: Lexicon-Bottlenecked Language-Image Pre-Training for Large-Scale Image-Text Sparse Retrieval… ☆40 · Updated last year
- ☆49 · Updated 2 years ago
- Code for studying OpenAI's CLIP explainability ☆34 · Updated 3 years ago
- Multimodal-Composite-Editing-and-Retrieval-update ☆33 · Updated 11 months ago
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆102 · Updated last year
- [ACM MM 2024] Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives ☆39 · Updated 3 weeks ago
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆153 · Updated last month
- Colorful Prompt Tuning for Pre-trained Vision-Language Models ☆49 · Updated 2 years ago
- ☆35 · Updated last year
- [ICLR 2023] Official code repository for "Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning" ☆59 · Updated 2 years ago
- [ICCV 2021] Official implementation of the paper "TRAR: Routing the Attention Spans in Transformers for Visual Question Answering" ☆67 · Updated 3 years ago