google-research-datasets / conceptual-12m
Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training.
☆397 · Updated last month
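The dataset ships only the (image-URL, caption) pairs, distributed as a tab-separated file, so using it means fetching each URL yourself and tolerating link rot. Below is a minimal sketch, assuming a `cc12m.tsv` with one `url<TAB>caption` record per line; the file name and the use of Pillow for decoding are assumptions, not part of the dataset release:

```python
import csv
import io
import urllib.request

from PIL import Image  # pip install pillow


def iter_pairs(tsv_path):
    """Yield (image_url, caption) pairs from a CC12M-style TSV.

    Assumes one tab-separated `url<TAB>caption` record per line.
    """
    with open(tsv_path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) == 2:
                yield row[0], row[1]


def fetch_image(url, timeout=10):
    """Download and decode one image; URLs rot, so failures are expected."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return Image.open(io.BytesIO(resp.read())).convert("RGB")
    except Exception:
        return None  # dead link, HTML error page, truncated file, ...


for url, caption in iter_pairs("cc12m.tsv"):
    image = fetch_image(url)
    if image is not None:
        pass  # hand (image, caption) to your pre-training pipeline
```

Serial fetching like this is only useful for spot checks; several tools in the list below exist precisely because twelve million URLs demand parallel, fault-tolerant downloading.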
Alternatives and similar repositories for conceptual-12m
Users interested in conceptual-12m are comparing it to the libraries listed below.
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆278 · Updated 2 years ago
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ☆778 · Updated 2 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆666 · Updated 2 years ago
- Get hundreds of millions of image+URL entries from the crawling@home dataset and preprocess them ☆222 · Updated last year
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆712 · Updated last year
- [NeurIPS 2023] This repository includes the official implementation of our paper "An Inverse Scaling Law for CLIP Training" ☆316 · Updated last year
- Multi-modality pre-training ☆502 · Updated last year
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆482 · Updated last year
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆572 · Updated last year
- Conceptual Captions is a dataset containing (image-URL, caption) pairs designed for the training and evaluation of machine learned image captioning systems ☆542 · Updated 4 years ago
- Reliably download millions of images efficiently (see the bulk-download sketch after this list) ☆117 · Updated 4 years ago
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022) ☆244 · Updated 2 months ago
- MERLOT: Multimodal Neural Script Knowledge Models ☆224 · Updated 3 years ago
- Generate text captions for images from their embeddings. ☆114 · Updated 2 years ago
- [ICLR 2022] code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆415 · Updated 2 years ago
- Large-scale text-video dataset. 10 million captioned short videos. ☆654 · Updated last year
- ☆228 · Updated last year
- Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval [ICCV'21] ☆370 · Updated 3 years ago
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆172 · Updated 3 years ago
- Language Models Can See: Plugging Visual Controls in Text Generation ☆258 · Updated 3 years ago
- DataComp: In search of the next generation of multimodal datasets ☆734 · Updated 3 months ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆402 · Updated last year
- CLIPScore EMNLP code ☆237 · Updated 2 years ago
- Code for paper LAFITE: Towards Language-Free Training for Text-to-Image Generation (CVPR 2022) ☆182 · Updated 2 years ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆174 · Updated 2 months ago
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆708 · Updated 3 years ago
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models (ICCV 2023) ☆141 · Updated 2 months ago
- CapDec: SOTA Zero Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (findings) ☆198 · Updated last year
- CLIP-like model evaluation ☆759 · Updated 2 weeks ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆138 · Updated 2 years ago
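For bulk fetching at CC12M scale, a widely used option is img2dataset (which may or may not be the downloader listed above). Below is a hedged sketch of its documented Python entry point; the `url_col`/`caption_col` values assume a `url<TAB>caption` header row has been prepended to the raw TSV, which the raw release does not include:

```python
# pip install img2dataset
from img2dataset import download

download(
    url_list="cc12m.tsv",        # TSV of (url, caption) pairs; header row assumed
    input_format="tsv",
    url_col="url",
    caption_col="caption",
    image_size=256,              # resize on the fly to keep shards small
    output_folder="cc12m",
    output_format="webdataset",  # tar shards of image+caption pairs
    processes_count=16,
    thread_count=32,
)
```

The webdataset output stores each resized image next to its caption in tar shards, a layout that streams well into pre-training data loaders.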