DRSY / MoTIS
[NAACL 2022] Mobile text-to-image search powered by multimodal semantic representation models (e.g., OpenAI's CLIP)
☆126 · Updated 2 years ago
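At its core, CLIP-based text-to-image search encodes the text query and all gallery images into a shared embedding space, then ranks images by cosine similarity to the query. A minimal sketch of that ranking step, using NumPy and assuming embeddings have already been precomputed (the function name and toy 2-D vectors here are illustrative, not from MoTIS):

```python
import numpy as np

def top_k_images(text_emb: np.ndarray, image_embs: np.ndarray, k: int = 3) -> np.ndarray:
    """Rank images by cosine similarity to a text embedding.

    text_emb:   (d,) embedding of the query (e.g., from a CLIP text encoder)
    image_embs: (n, d) precomputed image embeddings (e.g., from a CLIP image encoder)
    Returns the indices of the k most similar images, best first.
    """
    # L2-normalize so that a plain dot product equals cosine similarity
    t = text_emb / np.linalg.norm(text_emb)
    im = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    scores = im @ t
    # Negate scores so argsort returns highest-similarity indices first
    return np.argsort(-scores)[:k]

# Toy demo with 2-D "embeddings": the query points along the x-axis,
# so images whose embeddings align with x rank highest.
query = np.array([1.0, 0.0])
gallery = np.array([[0.9, 0.1], [0.0, 1.0], [0.7, -0.7]])
print(top_k_images(query, gallery, k=2))  # → [0 2]
```

In a real mobile deployment the image embeddings are computed once, stored on-device, and only the text encoder runs per query, which is what makes this kind of search fast enough for a phone gallery.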
Alternatives and similar repositories for MoTIS
Users interested in MoTIS are comparing it to the repositories listed below.
- Using pretrained encoder and language models to generate captions from multimedia inputs. ☆97 · Updated 2 years ago
- Utility to test the performance of CoreML models. ☆70 · Updated 5 years ago
- CLIP-Finder enables semantic offline searches of images from gallery photos using natural-language descriptions or the camera. Built on A… ☆84 · Updated last year
- ☆103 · Updated last year
- A simple library to speed up CLIP inference by up to 3x (on a K80 GPU). ☆223 · Updated 2 years ago
- ☆59 · Updated last year
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022). ☆246 · Updated 3 months ago
- ☆65 · Updated 2 years ago
- A Chinese-language CLIP encoder. ☆22 · Updated 3 years ago
- A repository containing datasets and tools to train a watermark classifier. ☆71 · Updated 3 years ago
- Efficiently read embeddings as a stream from any filesystem. ☆101 · Updated last month
- The official PyTorch implementation of the arXiv'23 paper "LayoutDETR: Detection Transformer Is a Good Multimodal Layout Designer". ☆100 · Updated 4 months ago
- A non-JIT implementation/replication of OpenAI's CLIP in PyTorch. ☆34 · Updated 4 years ago
- Implementation of DeepMind's Flamingo vision-language model, based on Hugging Face language models and ready for training. ☆168 · Updated 2 years ago
- Easily compute CLIP embeddings from video frames. ☆146 · Updated last year
- Get hundreds of millions of image+URL pairs from the Crawling@Home dataset and preprocess them. ☆222 · Updated last year
- ☆87 · Updated last year
- ☆18 · Updated 2 years ago
- U-2-Net (U Square Net), modified for paired-image style-transfer training. ☆51 · Updated 3 years ago
- Code and data for the ECCV 2020 paper "Fashion Captioning: Towards Generating Accurate Descriptions with Semantic Rewards". ☆85 · Updated 2 years ago
- Use CLIP to represent videos for retrieval tasks. ☆70 · Updated 4 years ago
- Let's make a video clip. ☆95 · Updated 3 years ago
- ☆140 · Updated 2 years ago
- Jupyter notebooks for experimenting with negative prompting in Stable Diffusion 2.0. ☆87 · Updated 2 years ago
- Inference script for Meta's LLaMA models using the Hugging Face wrapper. ☆110 · Updated 2 years ago
- Repository for the data in the paper "Explain Me the Painting: Multi-Topic Knowledgeable Art Description Generation". ☆20 · Updated 4 years ago
- ☆23 · Updated last year
- ☆112 · Updated 4 years ago
- Release of ImageNet-Captions. ☆51 · Updated 2 years ago
- Code/data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding". ☆269 · Updated last year