lzcemma / LeMDA
Code Example for Learning Multimodal Data Augmentation in Feature Space
☆43 · Updated 2 years ago
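LeMDA's approach is to learn the augmentation itself: a small augmentation network perturbs the latent features produced by the modality encoders, rather than transforming raw images or text, and it is trained jointly with the task network. The snippet below is a minimal, hypothetical sketch of that feature-space setup, assuming toy linear encoders, a residual MLP augmenter, and a simple consistency term; it simplifies the training objective (the paper trains the augmentation network adversarially), and none of the names mirror the repository's actual code.

```python
# Minimal, hypothetical sketch of feature-space multimodal augmentation.
# Encoders, dimensions, and loss weighting are illustrative assumptions,
# not the repository's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAugmenter(nn.Module):
    """Small residual MLP that perturbs fused latent features."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z):
        return z + self.net(z)  # augmentation happens in feature space

# Toy stand-ins for real image/text backbones and a classification head.
img_enc, txt_enc = nn.Linear(512, 256), nn.Linear(300, 256)
augmenter, head = FeatureAugmenter(512), nn.Linear(512, 10)

image_feats, text_feats = torch.randn(8, 512), torch.randn(8, 300)
labels = torch.randint(0, 10, (8,))

# Fuse per-modality features, then augment the fused representation.
z = torch.cat([img_enc(image_feats), txt_enc(text_feats)], dim=-1)
z_aug = augmenter(z)

logits, logits_aug = head(z), head(z_aug)
task_loss = F.cross_entropy(logits, labels) + F.cross_entropy(logits_aug, labels)
# Consistency term: keep predictions on augmented features close to the originals.
consistency = F.kl_div(F.log_softmax(logits_aug, dim=-1),
                       F.softmax(logits, dim=-1), reduction="batchmean")
(task_loss + 0.1 * consistency).backward()
```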
Alternatives and similar repositories for LeMDA
Users interested in LeMDA are comparing it to the libraries listed below.
- ☆26 · Updated 3 years ago
- Official Implementation of "Geometric Multimodal Contrastive Representation Learning" (https://arxiv.org/abs/2202.03390) ☆28 · Updated 4 months ago
- CrossCLR: Cross-modal Contrastive Learning For Multi-modal Video Representations, ICCV 2021 ☆63 · Updated 3 years ago
- Code for "Multitask Vision-Language Prompt Tuning" (https://arxiv.org/abs/2211.11720) ☆56 · Updated 11 months ago
- Multi-label Image Recognition with Partial Labels (IJCV'24, ESWA'24, AAAI'22) ☆39 · Updated 10 months ago
- Source code for EMNLP 2022 paper “PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models” ☆48 · Updated 2 years ago
- Official implementation of our EMNLP 2022 paper "CPL: Counterfactual Prompt Learning for Vision and Language Models" ☆33 · Updated 2 years ago
- MixGen: A New Multi-Modal Data Augmentation (see the sketch after this list) ☆122 · Updated 2 years ago
- [ICLR 23] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ☆39 · Updated last year
- [NeurIPS 2023] Bootstrapping Vision-Language Learning with Decoupled Language Pre-training ☆24 · Updated last year
- Official PyTorch implementation of "Improved Probabilistic Image-Text Representations" (ICLR 2024) ☆58 · Updated 11 months ago
- Robust Contrastive Learning against Noisy Views, CVPR 2022 ☆83 · Updated 3 years ago
- Official implementation of "Everything at Once - Multi-modal Fusion Transformer for Video Retrieval" (CVPR 2022) ☆106 · Updated 2 years ago
- Official code for ICML 2023 paper: On the Generalization of Multi-modal Contrastive Learning ☆25 · Updated last year
- [ACL 2021] Learning Relation Alignment for Calibrated Cross-modal Retrieval ☆30 · Updated 2 years ago
- Towards Unified and Effective Domain Generalization ☆31 · Updated last year
- Official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models) ☆116 · Updated 3 years ago
- Repository for the paper "Teaching Structured Vision & Language Concepts to Vision & Language Models" ☆46 · Updated last year
- TupleInfoNCE (ICCV 2021) ☆16 · Updated 2 years ago
- Code and results accompanying the paper "CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets" ☆57 · Updated last year
- ☆45 · Updated last year
- Code for EMNLP 2022 paper “Distilled Dual-Encoder Model for Vision-Language Understanding” ☆30 · Updated 2 years ago
- Code for the paper "Hybrid Contrastive Quantization for Efficient Cross-View Video Retrieval" (WWW'22, Oral) ☆18 · Updated 3 years ago
- Distribution-Aware Prompt Tuning for Vision-Language Models (ICCV 2023) ☆38 · Updated last year
- Compress conventional Vision-Language Pre-training data ☆51 · Updated last year
- A Comprehensive Empirical Study of Vision-Language Pre-trained Model for Supervised Cross-Modal Retrieval ☆42 · Updated 3 years ago
- PyTorch implementation of the paper "SuperLoss: A Generic Loss for Robust Curriculum Learning" (NeurIPS 2020) ☆29 · Updated 4 years ago
- Large Loss Matters in Weakly Supervised Multi-Label Classification (CVPR 2022) ☆46 · Updated last year
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆30 · Updated 3 weeks ago
- Code for "Label Propagation for Zero-shot Classification with Vision-Language Models" (CVPR 2024) ☆36 · Updated 9 months ago
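For contrast with LeMDA's feature-space approach, MixGen (listed above) augments in input space: it forms a new image-text pair by linearly interpolating two images and concatenating their captions. Below is a minimal, hypothetical sketch of that idea; the function name, tensor shapes, and mixing coefficient are illustrative assumptions rather than the repository's code.

```python
# Minimal, hypothetical sketch of MixGen-style input-space augmentation:
# interpolate two images, concatenate their captions. Names and the fixed
# mixing coefficient are illustrative assumptions.
import torch

def mixgen(images, texts, lam=0.5):
    """Mix each sample with a randomly chosen partner from the same batch."""
    perm = torch.randperm(images.size(0)).tolist()
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_texts = [t + " " + texts[j] for t, j in zip(texts, perm)]
    return mixed_images, mixed_texts

images = torch.rand(4, 3, 224, 224)                # toy image batch
texts = ["a dog", "a cat", "a red car", "a boat"]  # paired captions
aug_images, aug_texts = mixgen(images, texts)
```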