fawazsammani / awesome-vision-language-pretraining
Awesome Vision-Language Pretraining Papers
☆30 · Updated 4 months ago
Alternatives and similar repositories for awesome-vision-language-pretraining
Users interested in awesome-vision-language-pretraining are comparing it to the repositories listed below.
- Repository for the paper: Teaching Structured Vision & Language Concepts to Vision & Language Models · ☆46 · Updated last year
- [ICCV 2023] ALIP: Adaptive Language-Image Pre-training with Synthetic Caption · ☆98 · Updated last year
- ☆30 · Updated last year
- Code for the paper: "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] · ☆99 · Updated last year
- A comprehensive survey of Composed Multi-modal Retrieval (CMR), including Composed Image Retrieval (CIR) and Composed Video Retrieval (CV… · ☆35 · Updated 2 months ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding · ☆49 · Updated last month
- ☆92 · Updated last year
- A lightweight codebase for referring expression comprehension and segmentation · ☆54 · Updated 2 years ago
- Toolkit for Elevater Benchmark · ☆72 · Updated last year
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning · ☆163 · Updated last year
- [ICLR 2024, Spotlight] Sentence-level Prompts Benefit Composed Image Retrieval · ☆83 · Updated last year
- This repo is the official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models). · ☆116 · Updated 3 years ago
- Compress conventional Vision-Language Pre-training data · ☆51 · Updated last year
- ☆83 · Updated 3 years ago
- ☆61 · Updated last year
- 📍 Official PyTorch implementation of the paper "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS) · ☆52 · Updated last year
- [CVPR 2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory" · ☆68 · Updated 7 months ago
- ☆36 · Updated 2 years ago
- Benchmark data for "Rethinking Benchmarks for Cross-modal Image-text Retrieval" (SIGIR 2023) · ☆25 · Updated 2 years ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) · ☆45 · Updated 10 months ago
- Task Residual for Tuning Vision-Language Models (CVPR 2023) · ☆72 · Updated last year
- Source code for EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models" · ☆48 · Updated 2 years ago
- Cross Modal Retrieval with Querybank Normalisation · ☆55 · Updated last year
- Improving Visual Grounding with Visual-Linguistic Verification and Iterative Reasoning, CVPR 2022 · ☆97 · Updated 2 years ago
- [ICLR 2024] The official implementation of paper "UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling", by … · ☆74 · Updated last year
- SeqTR: A Simple yet Universal Network for Visual Grounding · ☆135 · Updated 6 months ago
- This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strat… · ☆78 · Updated 2 months ago
- ☆59 · Updated last year
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension · ☆49 · Updated last year
- Context-I2W: Mapping Images to Context-dependent Words for Accurate Zero-Shot Composed Image Retrieval [AAAI 2024 Oral] · ☆51 · Updated 5 months ago