anujshah1003 / VQA-Demo-GUI
This repository provides a PyQt4 GUI for a VQA (Visual Question Answering) demo built with the Keras deep learning library. The VQA model combines pre-trained VGG-16 weights for image features with GloVe vectors for question features; a minimal sketch of this architecture is shown below.
☆46 · Updated 3 years ago
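As a rough illustration of the architecture described above (not code taken from this repository), here is a minimal Keras sketch that fuses pre-trained VGG-16 image features with a GloVe-initialized question encoder. All layer sizes, the multiplicative fusion scheme, and the vocabulary/answer-set sizes are assumptions for illustration only.

```python
# Minimal sketch of a VGG-16 + GloVe VQA model in Keras.
# Sizes and the fusion scheme below are illustrative assumptions,
# not values taken from the VQA-Demo-GUI repository.
import numpy as np
from tensorflow.keras import Model, initializers, layers

NUM_ANSWERS = 1000   # assumed size of the candidate-answer vocabulary
VOCAB_SIZE = 10000   # assumed question-token vocabulary size
GLOVE_DIM = 300      # standard GloVe embedding dimension
MAX_Q_LEN = 30       # assumed maximum question length in tokens

# Image branch: 4096-d fc2 features, extracted offline with VGG-16.
image_feats = layers.Input(shape=(4096,), name="vgg16_fc2")
img = layers.Dense(1024, activation="tanh")(image_feats)

# Question branch: GloVe-initialized embedding followed by an LSTM.
# In practice the matrix would be loaded from GloVe files; a random
# placeholder keeps this sketch self-contained.
glove_matrix = np.random.rand(VOCAB_SIZE, GLOVE_DIM)
question = layers.Input(shape=(MAX_Q_LEN,), name="question_tokens")
emb = layers.Embedding(
    VOCAB_SIZE, GLOVE_DIM,
    embeddings_initializer=initializers.Constant(glove_matrix),
    trainable=False,
)(question)
q = layers.LSTM(512)(emb)
q = layers.Dense(1024, activation="tanh")(q)

# Fuse the two modalities element-wise and classify over answers.
fused = layers.multiply([img, q])
out = layers.Dense(NUM_ANSWERS, activation="softmax")(fused)

model = Model(inputs=[image_feats, question], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

Element-wise multiplication is one common, cheap fusion choice for this kind of two-branch VQA model; concatenation followed by a dense layer is an equally plausible alternative.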
Alternatives and similar repositories for VQA-Demo-GUI
Users interested in VQA-Demo-GUI are comparing it to the repositories listed below.
- CNN+LSTM, attention-based, and MUTAN-based models for Visual Question Answering ☆75 · Updated 5 years ago
- Image captioning based on the Bottom-Up and Top-Down Attention model ☆102 · Updated 6 years ago
- BERT + Image Captioning ☆132 · Updated 4 years ago
- [EMNLP 2018] Training for Diversity in Image Paragraph Captioning ☆89 · Updated 5 years ago
- PyTorch VQA implementation that achieved top performance in the (ECCV18) VizWiz Grand Challenge: Answering Visual Questions from Blind P… ☆60 · Updated 6 years ago
- Image Captioning with Keras ☆63 · Updated 4 years ago
- Image captioning using Keras, a pretrained VGG16 model, CNN, and RNN ☆44 · Updated 5 years ago
- VQA - Visual Question Answering ☆14 · Updated 8 years ago
- Generate captions for images using a CNN-RNN model trained on the Microsoft Common Objects in COntext (MS COCO) dataset ☆79 · Updated 6 years ago
- Unofficial TensorFlow implementation of "Bottom-up and Top-down attention for VQA" (TF v. 1.13) ☆39 · Updated 5 years ago
- PyTorch implementation of Show, Attend and Tell: Neural Image Caption Generation with Visual Attention ☆94 · Updated 6 years ago
- PyTorch VQA: Visual Question Answering (https://arxiv.org/pdf/1505.00468.pdf) ☆96 · Updated last year
- Automatic image captioning model based on Caffe, using features from bottom-up attention ☆245 · Updated 2 years ago
- PyTorch implementation of Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning ☆84 · Updated 4 years ago
- Chinese image captioning, based on VGG + LSTM + attention ☆10 · Updated 6 years ago
- PyTorch implementation of Image Captioning with Bottom-Up, Top-Down Attention ☆166 · Updated 6 years ago
- A self-evident application of the VQA task is to design systems that aid blind people with sight-reliant queries. The VizWiz VQA dataset … ☆15 · Updated last year
- Novel Object Captioner: captioning images with diverse objects ☆41 · Updated 7 years ago
- Starter code in PyTorch for the Visual Dialog challenge ☆191 · Updated 2 years ago
- Show, Edit and Tell: A Framework for Editing Image Captions, CVPR 2020 ☆80 · Updated 4 years ago
- ☆38 · Updated 6 years ago
- Image Captioning: Implementing the Neural Image Caption Generator ☆21 · Updated 4 years ago
- [Python 3] TensorFlow implementation of "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention" ☆65 · Updated 6 years ago
- Image captioning model using both local and global attention techniques, with the model exposed as an API via Flask ☆25 · Updated 4 years ago
- A simple Flask app that generates an answer given an image and a natural-language question about it. The app uses a deep learning model,… ☆12 · Updated 2 years ago
- Show and Tell: A Neural Image Caption Generator ☆107 · Updated 5 years ago
- Re-implementation of the CVPR 2017 paper "Dense Captioning with Joint Inference and Visual Context", with minor changes, in TensorFlow (mAP 8.296 after… ☆61 · Updated 6 years ago
- VQA driven by bottom-up and top-down attention and knowledge ☆14 · Updated 6 years ago
- PyTorch implementation of Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning ☆107 · Updated 7 years ago
- Code for Dense Relational Captioning ☆69 · Updated 2 years ago