Cadene / recipe1m.bootstrap.pytorch
Retrieve recipes from foodie pictures using Deep Learning and PyTorch
☆57 · Updated 4 years ago
Alternatives and similar repositories for recipe1m.bootstrap.pytorch
Users interested in recipe1m.bootstrap.pytorch are comparing it to the libraries listed below.
- Learning Cross-Modal Embeddings with Adversarial Networks for Cooking Recipes and Food Images ☆58 · Updated 6 years ago
- This repository contains an implementation of the models introduced in the paper Dialog-based Interactive Image Retrieval. The network is… ☆70 · Updated 5 years ago
- Code and Resources for the Transformer Encoder Reasoning Network (TERN) - https://arxiv.org/abs/2004.09144 ☆58 · Updated last year
- [AAAI'20] Code release for "HAL: Improved Text-Image Matching by Mitigating Visual Semantic Hubs". ☆38 · Updated 2 years ago
- TensorFlow implementation of the CVPR 2020 paper "Image Search with Text Feedback by Visiolinguistic Attention Learning" ☆63 · Updated 5 years ago
- Official code for the WACV 2021 paper "Compositional Learning of Image-Text Query for Image Retrieval" ☆56 · Updated 4 years ago
- Code for the model "Heterogeneous Graph Learning for Visual Commonsense Reasoning" (NeurIPS 2019) ☆47 · Updated 5 years ago
- A paper list of visual semantic embeddings and text-image retrieval. ☆41 · Updated 5 years ago
- Fashion 200K dataset used in the paper "Automatic Spatially-aware Fashion Concept Discovery". ☆67 · Updated 3 years ago
- A collection of multimodal datasets and visual features for VQA and captioning in PyTorch. Just run "pip install multimodal" ☆83 · Updated 3 years ago
- Connective Cognition Network for Directional Visual Commonsense Reasoning ☆15 · Updated 4 years ago
- ☆33 · Updated 7 years ago
- This is the repo for multi-level textual grounding ☆34 · Updated 5 years ago
- Data of the ACL 2019 paper "Expressing Visual Relationships via Language". ☆62 · Updated 5 years ago
- ☆64 · Updated 3 years ago
- The official code for the paper "Matching Images and Text with Multi-modal Tensor Fusion and Re-ranking", ACM Multimedia 2019 Oral ☆68 · Updated 6 years ago
- Mixture-of-Embeddings-Experts ☆120 · Updated 5 years ago
- Polysemous Visual-Semantic Embedding for Cross-Modal Retrieval (CVPR 2019) ☆134 · Updated last year
- Show, Edit and Tell: A Framework for Editing Image Captions, CVPR 2020 ☆81 · Updated 5 years ago
- Dense video captioning in PyTorch ☆41 · Updated 6 years ago
- Learning Cross-Modal Embeddings with Adversarial Networks for Cooking Recipes and Food Images ☆30 · Updated 6 years ago
- Contains code for the EMNLP paper "Learning Linguistic Attributes for Zero-Shot Verb Classification" ☆26 · Updated 7 years ago
- Implementation of "MULE: Multimodal Universal Language Embedding" ☆16 · Updated 5 years ago
- PyTorch implementation of the image-sentence embedding method described in "Unifying Visual-Semantic Embeddings with Multimodal Neural La… ☆87 · Updated 8 years ago
- Language-Agnostic Visual-Semantic Embeddings (ICCV'19) ☆22 · Updated 6 years ago
- Implementation for our paper "Conditional Image-Text Embedding Networks" ☆39 · Updated 5 years ago
- NeurIPS 2019 paper: RUBi: Reducing Unimodal Biases for Visual Question Answering ☆65 · Updated 4 years ago
- The source code of Multi-modal Circulant Fusion (MCF) for Temporal Activity Localization ☆23 · Updated 6 years ago
- Research code for the NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": UNITER… ☆119 · Updated 4 years ago
- Code release for Park et al., "Multimodal Explanations: Justifying Decisions and Pointing to the Evidence", CVPR 2018 ☆48 · Updated 7 years ago