fawazsammani / awesome-vision-language-pretraining
Awesome Vision-Language Pretraining Papers
☆30 · Updated 2 weeks ago
Alternatives and similar repositories for awesome-vision-language-pretraining: users interested in this repository are comparing it to the ones listed below.
- [ICCV 2023] ALIP: Adaptive Language-Image Pre-training with Synthetic Caption ☆97 · Updated last year
- Repository for the paper: Teaching Structured Vision & Language Concepts to Vision & Language Models ☆46 · Updated last year
- ☆28 · Updated last year
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning ☆155 · Updated last year
- ☆61 · Updated last year
- Cross Modal Retrieval with Querybank Normalisation ☆55 · Updated last year
- A Comprehensive Empirical Study of Vision-Language Pre-trained Model for Supervised Cross-Modal Retrieval ☆42 · Updated 2 years ago
- A lightweight codebase for referring expression comprehension and segmentation ☆52 · Updated 2 years ago
- ☆89 · Updated last year
- ☆65 · Updated last year
- MixGen: A New Multi-Modal Data Augmentation ☆119 · Updated 2 years ago
- 📍 Official PyTorch implementation of the paper "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS) ☆52 · Updated last year
- [CVPR 2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory" ☆66 · Updated 3 months ago
- Benchmark data for "Rethinking Benchmarks for Cross-modal Image-text Retrieval" (SIGIR 2023) ☆26 · Updated last year
- [ICCV 2023] CTP: Towards Vision-Language Continual Pretraining via Compatible Momentum Contrast and Topology Preservation ☆31 · Updated 3 months ago
- ☆37 · Updated 9 months ago
- Code for the paper "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" (ICCV 2023) ☆96 · Updated last year
- [ICLR 2024, Spotlight] Sentence-level Prompts Benefit Composed Image Retrieval ☆74 · Updated 9 months ago
- Dynamic Modality Interaction Modeling for Image-Text Retrieval (SIGIR 2021) ☆67 · Updated 2 years ago
- Improving Visual Grounding with Visual-Linguistic Verification and Iterative Reasoning (CVPR 2022) ☆94 · Updated 2 years ago
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR 2024, Highlight) ☆63 · Updated 6 months ago
- SeqTR: A Simple yet Universal Network for Visual Grounding ☆131 · Updated 3 months ago
- Implementation of our IJCAI 2022 oral paper, ER-SAN: Enhanced-Adaptive Relation Self-Attention Network for Image Captioning ☆22 · Updated last year
- ☆34 · Updated last year
- [TMM 2023] Self-paced Curriculum Adapting of CLIP for Visual Grounding ☆115 · Updated last week
- Official PyTorch implementation of Clover: Towards A Unified Video-Language Alignment and Fusion Model (CVPR 2023) ☆40 · Updated last year
- ☆34 · Updated 2 years ago
- Toolkit for the ELEVATER benchmark ☆69 · Updated last year
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆45 · Updated 6 months ago
- [CVPR 2022] Pseudo-Q: Generating Pseudo Language Queries for Visual Grounding ☆145 · Updated 6 months ago