microsoft / GenerativeImage2Text
GIT: A Generative Image-to-text Transformer for Vision and Language
☆563 · Updated last year
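GIT encodes an image with a vision transformer and decodes a caption autoregressively with a text transformer conditioned on the visual tokens. As a quick orientation, here is a minimal captioning sketch using the Hugging Face transformers port of GIT; the microsoft/git-base-coco checkpoint and the sample image URL are illustrative choices, not the only options:

```python
# Minimal captioning sketch for GIT via its Hugging Face transformers port.
# Assumes `transformers`, `torch`, `Pillow`, and `requests` are installed;
# "microsoft/git-base-coco" is one of several published checkpoint sizes.
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # sample COCO image
image = Image.open(requests.get(url, stream=True).raw)

# Preprocess the image to pixel values, then generate caption token ids.
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

The larger checkpoints published alongside the base model follow the same generate interface.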
Alternatives and similar repositories for GenerativeImage2Text:
Users interested in GenerativeImage2Text are comparing it to the repositories listed below.
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆651 · Updated 2 years ago
- CLIP-like model evaluation (see the zero-shot sketch after this list) ☆696 · Updated 3 weeks ago
- Official Repository of ChatCaptioner ☆465 · Updated 2 years ago
- Multi-modality pre-training ☆491 · Updated 11 months ago
- GRiT: A Generative Region-to-text Transformer for Object Understanding (https://arxiv.org/abs/2212.00280) ☆320 · Updated last year
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆274 · Updated 2 years ago
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆690 · Updated 3 years ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention network from DeepMind, in PyTorch ☆1,238 · Updated 2 years ago
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆452 · Updated last year
- DataComp: In search of the next generation of multimodal datasets ☆700 · Updated last year
- [ICLR 2022] code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆411 · Updated 2 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆396 · Updated last year
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆482 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,191 · Updated 9 months ago
- Large-scale text-video dataset. 10 million captioned short videos. ☆629 · Updated 8 months ago
- [Image 2 Text Para] Transform Image into Unique Paragraph with ChatGPT, BLIP2, OFA, GRIT, Segment Anything, ControlNet. ☆806 · Updated last year
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,131 · Updated last year
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆478 · Updated 2 years ago
- Official Open Source code for "Scaling Language-Image Pre-training via Masking" ☆420 · Updated 2 years ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆866 · Updated 5 months ago
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022) ☆241 · Updated 2 years ago
- Robust fine-tuning of zero-shot models ☆695 · Updated 2 years ago
- ☆777 · Updated 9 months ago
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆266 · Updated 10 months ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆707 · Updated last year
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆935 · Updated last year
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆757 · Updated last year
- Recent Advances in Vision and Language Pre-training (VLP) ☆292 · Updated last year
- Language Models Can See: Plugging Visual Controls in Text Generation ☆256 · Updated 2 years ago
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… ☆2,498 · Updated last year
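Many of the repositories above extend, fine-tune, or evaluate CLIP-style contrastive models (DeCLIP, RegionCLIP, CLIP4Clip, the training and evaluation harnesses). As a shared point of reference, here is a minimal zero-shot classification sketch with the original OpenAI CLIP weights via Hugging Face transformers; the checkpoint name, image URL, and label prompts are illustrative assumptions:

```python
# Zero-shot image classification with CLIP, the model most entries in this
# list build on. Uses the Hugging Face transformers port of the OpenAI
# ViT-B/32 weights; the candidate labels are arbitrary example prompts.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # sample COCO image
image = Image.open(requests.get(url, stream=True).raw)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

# Score the image against each text prompt; softmax over image-text logits
# turns the similarity scores into a probability distribution over labels.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
print({label: round(p.item(), 3) for label, p in zip(labels, probs)})
```

The same pattern (encode image and text into a shared embedding space, rank by similarity) is what the retrieval, region-grounding, and video variants in this list adapt to their respective domains.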