mshukor / TFood
[CVPRW22] Official implementation of T-Food: "Transformer Decoders with MultiModal Regularization for Cross-Modal Food Retrieval". Accepted at the CVPR 2022 MULA Workshop.
Related projects
Alternatives and complementary repositories for TFood
- CrossCLR: Cross-modal Contrastive Learning for Multi-modal Video Representations, ICCV 2021
- Code for "Learning the Best Pooling Strategy for Visual Semantic Embedding", CVPR 2021 (Oral)
- MixGen: A New Multi-Modal Data Augmentation
- A Comprehensive Empirical Study of Vision-Language Pre-trained Model for Supervised Cross-Modal Retrieval
- Official PyTorch implementation of "Probabilistic Cross-Modal Embedding" (CVPR 2021)
- SimVLM: Simple Visual Language Model Pretraining with Weak Supervision
- Code and resources for the Transformer Encoder Reasoning Network (TERN) - https://arxiv.org/abs/2004.09144
- https://layer6ai-labs.github.io/xpool/
- [CVPR 2023] VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval
- [AAAI 2021] Code for "Similarity Reasoning and Filtration for Image-Text Matching"
- Implementation of our CVPR 2020 paper "Graph Structured Network for Image-Text Matching"
- Adaptive Cross-Modal Embeddings for Image-Sentence Alignment
- Official implementation of "CoSMo: Content-Style Modulation for Image Retrieval with Text Feedback", presented at CVPR 2021
- Code for the CVPR 2021 paper "Revamping Cross-Modal Recipe Retrieval with Hierarchical Transformers and Self-supervised Learning"
- PyTorch implementation of the CVPR 2021 paper "Distilling Audio-Visual Knowledge by Compositional Contrastive Learning"
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration
- Source code for "Universal Weighting Metric Learning for Cross-Modal Matching", accepted at CVPR 2020
- Official implementation of "Everything at Once - Multi-modal Fusion Transformer for Video Retrieval", CVPR 2022
- Code for the journal paper "Learning Dual Semantic Relations with Graph Attention for Image-Text Matching", TCSVT, 2020
- Implementation of our CVPR 2022 paper "Negative-Aware Attention Framework for Image-Text Matching"
- Dynamic Modality Interaction Modeling for Image-Text Retrieval, SIGIR '21
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022)
- Official code for the paper "Matching Images and Text with Multi-modal Tensor Fusion and Re-ranking", ACM Multimedia 2019 (Oral)
- A PyTorch implementation of Multimodal Few-Shot Learning with Frozen Language Models, with OPT
- [SIGIR 2022] CenterCLIP: Token Clustering for Efficient Text-Video Retrieval. Also a text-video retrieval toolbox based on CLIP + fast p…
- Official code for the ICCV 2023 paper "LexLIP: Lexicon-Bottlenecked Language-Image Pre-Training for Large-Scale Image-Text Sparse Retrieval…