DRSY / MoTIS
[NAACL 2022] Mobile Text-to-Image search powered by multimodal semantic representation models (e.g., OpenAI's CLIP)
☆127 · Updated 2 years ago
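At its core, CLIP-style text-to-image search embeds images and the query text into a shared vector space and ranks images by cosine similarity to the query. A minimal sketch of the ranking step is below, using stand-in random embeddings; in a real pipeline, the vectors would come from CLIP's image and text encoders.

```python
import numpy as np

def top_k_images(text_emb, image_embs, k=3):
    """Rank images by cosine similarity to the text query embedding."""
    t = text_emb / np.linalg.norm(text_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = imgs @ t                      # cosine similarity per image
    order = np.argsort(-sims)[:k]        # indices of the k best matches
    return order, sims[order]

# Stand-in embeddings (hypothetical data, not CLIP output):
rng = np.random.default_rng(0)
image_embs = rng.normal(size=(100, 512))
# A query vector deliberately close to image 42:
text_emb = image_embs[42] + 0.1 * rng.normal(size=512)

idx, scores = top_k_images(text_emb, image_embs, k=3)
print(idx[0])  # image 42 ranks first
```

On-device apps like MoTIS typically precompute and cache the image embeddings, so only the short text query needs to be encoded at search time.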
Alternatives and similar repositories for MoTIS
Users interested in MoTIS are comparing it to the repositories listed below.
- Using pretrained encoder and language models to generate captions from multimedia inputs. ☆98 · Updated 2 years ago
- A repository containing datasets and tools to train a watermark classifier. ☆74 · Updated 3 years ago
- Utility to test the performance of Core ML models. ☆70 · Updated 5 years ago
- Easily compute CLIP embeddings from video frames. ☆147 · Updated 2 years ago
- ECCV 2020 paper: Fashion Captioning: Towards Generating Accurate Descriptions with Semantic Rewards. Code and data. ☆86 · Updated 2 years ago
- CLIP-Finder enables semantic offline searches of images from gallery photos using natural language descriptions or the camera. Built on A… ☆89 · Updated last year
- ☆62 · Updated 2 months ago
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022). ☆246 · Updated 6 months ago
- The official PyTorch implementation of the arXiv 2023 paper "LayoutDETR: Detection Transformer Is a Good Multimodal Layout Designer". ☆102 · Updated 7 months ago
- ☆87 · Updated last year
- Efficiently read embeddings in streaming fashion from any filesystem. ☆104 · Updated 4 months ago
- Get hundreds of millions of image+URL pairs from the crawling-at-home dataset and preprocess them. ☆223 · Updated last year
- ☆65 · Updated 2 years ago
- Diffusion-based markup-to-image generation. ☆83 · Updated 2 years ago
- Big-Interleaved-Dataset. ☆58 · Updated 2 years ago
- ☆23 · Updated last year
- ☆103 · Updated last year
- Chinese-language text encoder for CLIP. ☆22 · Updated 3 years ago
- ☆18 · Updated 2 years ago
- Code/data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding". ☆269 · Updated last year
- Let's make a video clip. ☆96 · Updated 3 years ago
- VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automa… ☆78 · Updated 3 years ago
- A non-JIT implementation/replication of OpenAI's CLIP in PyTorch. ☆34 · Updated 4 years ago
- Code used for the creation of OBELICS, an open, massive, and curated collection of interleaved image-text web documents, containing 141M d… ☆211 · Updated last year
- Implementation of DeepMind's Flamingo vision-language model, based on Hugging Face language models and ready for training. ☆168 · Updated 2 years ago
- A simple library that speeds up CLIP inference by up to 3x (on a K80 GPU). ☆230 · Updated 2 years ago
- ☆141 · Updated 3 years ago
- Use CLIP to represent videos for retrieval tasks. ☆70 · Updated 4 years ago
- 1st-place solution in the Google Universal Image Embedding challenge. ☆67 · Updated 2 years ago
- Official implementation of "Active Image Indexing". ☆60 · Updated 2 years ago