salesforce / BLIP
PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
☆5,604 · Updated last year
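For quick orientation, here is a minimal captioning sketch. It uses the Hugging Face transformers port of BLIP rather than this repo's own loaders, so treat it as one convenient entry point, not the official workflow; the checkpoint name, image path, and generation length are illustrative assumptions.

```python
# Minimal BLIP image-captioning sketch via the Hugging Face transformers port
# (assumes: pip install transformers pillow torch).
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Illustrative checkpoint; other BLIP variants exist on the Hub.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("example.jpg").convert("RGB")  # any local image
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```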
Alternatives and similar repositories for BLIP
Users interested in BLIP are comparing it to the libraries listed below.
- LAVIS - A One-stop Library for Language-Vision Intelligence☆11,065 · Updated last year
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework☆2,546 · Updated last year
- An open-source implementation of CLIP (see the zero-shot sketch after this list).☆13,113 · Updated last month
- An open-source framework for training large multimodal models.☆4,050 · Updated last year
- Grounded Language-Image Pre-training☆2,553 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI☆2,620 · Updated last year
- Easily turn large sets of image URLs into an image dataset. Can download, resize and package 100M URLs in 20h on one machine.☆4,237 · Updated last month
- Easily compute CLIP embeddings and build a CLIP retrieval system with them☆2,705 · Updated 4 months ago
- Code for ALBEF: a new vision-language pre-training method☆1,738 · Updated 3 years ago
- Simple image captioning model☆1,404 · Updated last year
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding☆3,103 · Updated last year
- Open-Set Grounded Text-to-Image Generation☆2,182 · Updated last year
- The official repo of Qwen-VL (通义千问-VL), a chat & pretrained large vision-language model proposed by Alibaba Cloud.☆6,423 · Updated last year
- Open-source and strong foundation image recognition models.☆3,519 · Updated 9 months ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22)☆2,134 · Updated last year
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch☆1,188 · Updated 2 years ago
- CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image☆31,959 · Updated last year
- Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models.☆7,487 · Updated last year
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family☆2,537 · Updated 8 months ago
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"☆4,754 · Updated last year
- Painter & SegGPT Series: Vision Foundation Models from BAAI☆2,585 · Updated last year
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more.☆3,271 · Updated 6 months ago
- This repository contains the code of the CVPR 2022 paper "Image Segmentation Using Text and Image Prompts".☆1,300 · Updated last year
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision"☆1,510 · Updated last year
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"☆13,062 · Updated last year
- Scenic: A Jax Library for Computer Vision Research and Beyond☆3,731 · Updated last week
- Caption-Anything is a versatile tool combining image segmentation, visual captioning, and ChatGPT, generating tailored captions with diverse controls for user preferences.☆1,770 · Updated 2 years ago
- Implementation of 🦩 Flamingo, DeepMind's state-of-the-art few-shot visual question answering attention network, in Pytorch☆1,273 · Updated 3 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training).☆1,229 · Updated last year
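Several entries above center on contrastive image-text models. As referenced in the OpenCLIP item, here is a minimal zero-shot classification sketch using open_clip; the model name, pretrained tag, image path, and candidate captions are illustrative assumptions, not recommendations.

```python
# Zero-shot image classification with OpenCLIP
# (assumes: pip install open_clip_torch pillow torch).
import torch
from PIL import Image
import open_clip

# Illustrative model/weights pair from the OpenCLIP model zoo.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # any local image
text = tokenizer(["a diagram", "a dog", "a cat"])  # candidate captions

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity: normalize, then dot product, scaled and softmaxed.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # one probability per candidate caption
```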