DRSY / MoTIS
[NAACL 2022] Mobile Text-to-Image search powered by multimodal semantic representation models (e.g., OpenAI's CLIP)
☆124 · Updated 2 years ago
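For context, the sketch below illustrates the text-to-image retrieval pattern that MoTIS and most of the repositories listed further down are built around: embed images and a text query with CLIP, then rank by cosine similarity. It is only an illustrative example, not MoTIS's code; it assumes the Hugging Face `transformers` package, and the model name, photo folder, and helper functions are placeholders.

```python
# Minimal, illustrative sketch (not MoTIS's actual implementation) of
# CLIP-based text-to-image search. Assumes `transformers`, `torch`, and
# `Pillow` are installed; the folder and helper names are placeholders.
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def embed_images(paths):
    # Encode a batch of images into L2-normalized CLIP embeddings.
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def search(query, paths, image_feats, top_k=5):
    # Encode the text query and rank images by cosine similarity.
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        text_feat = model.get_text_features(**inputs)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    scores = (image_feats @ text_feat.T).squeeze(-1)
    idxs = scores.topk(min(top_k, len(paths))).indices.tolist()
    return [(paths[i], scores[i].item()) for i in idxs]

if __name__ == "__main__":
    photo_paths = sorted(Path("photos").glob("*.jpg"))  # placeholder folder
    photo_feats = embed_images(photo_paths)
    for path, score in search("a dog playing on the beach", photo_paths, photo_feats):
        print(f"{score:.3f}  {path}")
```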
Alternatives and similar repositories for MoTIS
Users that are interested in MoTIS are comparing it to the libraries listed below
- OpenAI CLIP CoreML version for iOS: text-image embeddings, image search, image clustering, image classification ☆20 · Updated 2 years ago
- Using pretrained encoder and language models to generate captions from multimedia inputs. ☆97 · Updated 2 years ago
- Utility to test the performance of CoreML models. ☆70 · Updated 5 years ago
- Easily compute CLIP embeddings from video frames ☆145 · Updated last year
- CLIP-Finder enables semantic offline searches of images from gallery photos using natural language descriptions or the camera. Built on A… ☆82 · Updated last year
- ☆19 · Updated last year
- Get hundreds of millions of image+url pairs from the crawling at home dataset and preprocess them ☆221 · Updated last year
- Efficiently read embeddings in streaming mode from any filesystem ☆101 · Updated last year
- A non-JIT implementation / replication of OpenAI's CLIP in PyTorch ☆34 · Updated 4 years ago
- Diffusion-based markup-to-image generation ☆82 · Updated 2 years ago
- A simple web server/API over an rclip-style CLIP embedding database. ☆32 · Updated 2 years ago
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022) ☆243 · Updated last month
- ☆104 · Updated last year
- A repository containing datasets and tools to train a watermark classifier. ☆71 · Updated 3 years ago
- ECCV 2020 paper: Fashion Captioning: Towards Generating Accurate Descriptions with Semantic Rewards. Code and data. ☆85 · Updated 2 years ago
- ☆141 · Updated 2 years ago
- The official PyTorch implementation for the arXiv'23 paper "LayoutDETR: Detection Transformer Is a Good Multimodal Layout Designer" ☆100 · Updated 2 months ago
- Repository for the data in the paper "Explain Me the Painting: Multi-Topic Knowledgeable Art Description Generation". ☆20 · Updated 3 years ago
- ☆65 · Updated last year
- ☆60 · Updated last year
- Implementation of the DeepMind Flamingo vision-language model, based on Hugging Face language models and ready for training ☆167 · Updated 2 years ago
- U-2-Net: U Square Net - modified for paired image training of style transfer ☆51 · Updated 3 years ago
- A simple library that speeds up CLIP inference by up to 3x (K80 GPU) ☆220 · Updated 2 years ago
- ALIGN trained on the COYO dataset ☆29 · Updated last year
- Load any CLIP model with a standardized interface ☆21 · Updated last year
- Let's make a video clip ☆96 · Updated 3 years ago
- Official implementation of "Active Image Indexing" ☆59 · Updated 2 years ago
- Search photos on Unsplash based on OpenAI's CLIP model, supporting search with joint image+text queries and attention visualization. ☆222 · Updated 3 years ago
- Jupyter Notebooks for experimenting with negative prompting with Stable Diffusion 2.0. ☆87 · Updated 2 years ago
- This repo provides scripts for converting TensorFlow and PyTorch models to Core ML for a variety of tasks. Converted models like efficientDe… ☆39 · Updated 5 years ago