openai / CLIP
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image.
☆31,564 · Updated last year
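CLIP ranks candidate text snippets for an image by cosine similarity between their embeddings. The scoring step can be sketched with placeholder vectors (a minimal illustration only — the real model produces the embeddings with its image and text encoders, and the temperature value here is illustrative, not the learned one):

```python
import numpy as np

def clip_style_scores(image_emb, text_embs, temperature=100.0):
    """Return a probability per text snippet, CLIP-style."""
    # Normalize embeddings to unit length, as CLIP does before comparison.
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    # Cosine similarities, scaled by a logit temperature.
    logits = temperature * text_embs @ image_emb
    # Softmax over snippets turns similarities into probabilities.
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Placeholder 4-d embeddings; real CLIP embeddings are 512-d or larger.
image = np.array([1.0, 0.0, 0.0, 0.0])
texts = np.array([[0.9, 0.1, 0.0, 0.0],   # nearly aligned with the image
                  [0.0, 1.0, 0.0, 0.0]])  # orthogonal to the image
probs = clip_style_scores(image, texts)
print(probs.argmax())  # → 0, the most relevant snippet
```

With the actual library, `clip.load(...)` plus `model.encode_image(...)` / `model.encode_text(...)` supply the embeddings that this scoring step consumes.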
Alternatives and similar repositories for CLIP
Users interested in CLIP are comparing it to the libraries listed below.
- An open source implementation of CLIP. ☆12,963 · Updated 2 weeks ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆11,020 · Updated last year
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,569 · Updated last year
- 🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch. ☆31,617 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,289 · Updated this week
- PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO ☆7,295 · Updated last year
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,050 · Updated this week
- Fast and memory-efficient exact attention ☆20,541 · Updated this week
- PyTorch code and models for the DINOv2 self-supervised learning method. ☆11,882 · Updated 3 months ago
- High-Resolution Image Synthesis with Latent Diffusion Models ☆13,539 · Updated last year
- Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch ☆11,335 · Updated last year
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆12,945 · Updated 11 months ago
- Easily turn large sets of image URLs into an image dataset. Can download, resize, and package 100M URLs in 20h on one machine. ☆4,211 · Updated last month
- Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch ☆5,630 · Updated last year
- Taming Transformers for High-Resolution Image Synthesis ☆6,343 · Updated last year
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal model… ☆152,590 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,094 · Updated this week
- PyTorch implementation of MAE https://arxiv.org/abs/2111.06377 ☆8,089 · Updated last year
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆23,957 · Updated last year
- ImageBind One Embedding Space to Bind Them All ☆8,859 · Updated last month
- Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and … ☆17,136 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆40,670 · Updated last week
- Implementation of Denoising Diffusion Probabilistic Model in Pytorch ☆10,169 · Updated 3 months ago
- A playbook for systematically maximizing the performance of deep learning models. ☆29,395 · Updated last year
- Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes. ☆30,430 · Updated last week
- End-to-End Object Detection with Transformers ☆14,866 · Updated last year
- Ongoing research training transformer models at scale ☆14,225 · Updated this week
- [ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection" ☆9,276 · Updated last year