leolee99 / PAU
The official implementation of the paper "Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval", accepted at NeurIPS 2023.
☆25 · Updated last year
Alternatives and similar repositories for PAU
Users interested in PAU are comparing it to the libraries listed below.
- [ICML 2024] "Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models" ☆55 · Updated 11 months ago
- ☆27 · Updated 2 years ago
- Deep Evidential Learning with Noisy Correspondence for Cross-modal Retrieval (ACM Multimedia 2022, PyTorch code) ☆44 · Updated last year
- ☆37 · Updated 2 years ago
- Code for the paper "Negative Pre-aware for Noisy Cross-modal Matching" (AAAI 2024) ☆22 · Updated last month
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆74 · Updated 6 months ago
- [ICLR 2024, Spotlight] Sentence-level Prompts Benefit Composed Image Retrieval ☆85 · Updated last year
- Context-I2W: Mapping Images to Context-dependent words for Accurate Zero-Shot Composed Image Retrieval [AAAI 2024 Oral] ☆55 · Updated 3 months ago
- ☆74 · Updated last year
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning ☆166 · Updated 2 years ago
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆170 · Updated last year
- VQACL: A Novel Visual Question Answering Continual Learning Setting (CVPR'23) ☆40 · Updated last year
- [AAAI 2024] GMMFormer: Gaussian-Mixture-Model Based Transformer for Efficient Partially Relevant Video Retrieval ☆18 · Updated last year
- [CVPR 2024] Do you remember? Dense Video Captioning with Cross-Modal Memory Retrieval ☆60 · Updated last year
- Official repository for "Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting" [CVPR 2023] ☆123 · Updated 2 years ago
- (CVPR 2024) MeaCap: Memory-Augmented Zero-shot Image Captioning ☆49 · Updated last year
- [SIGIR 2024] Simple but Effective Raw-Data Level Multimodal Fusion for Composed Image Retrieval ☆42 · Updated last year
- ☆13 · Updated 8 months ago
- [CVPR 2023 Highlight & TPAMI] Video-Text as Game Players: Hierarchical Banzhaf Interaction for Cross-Modal Representation Learning ☆121 · Updated 8 months ago
- ☆20 · Updated last year
- Implementation of "DIME-FM: DIstilling Multimodal and Efficient Foundation Models" ☆15 · Updated last year
- ☆100 · Updated last year
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization ☆106 · Updated last year
- ☆25 · Updated last year
- ☆94 · Updated last year
- PyTorch implementation for Cross-modal Retrieval with Noisy Correspondence via Consistency Refining and Mining (TIP 2024) ☆17 · Updated last year
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆50 · Updated 4 months ago
- This is a summary of research on noisy correspondence. There may be omissions; if anything is missing, please get in touch with us. Our em… ☆69 · Updated last month
- Code for the paper "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] ☆104 · Updated 2 years ago
- ☆30 · Updated 2 years ago