openai / CLIP
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet for a given image
☆31,236 · Updated last year
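For orientation, a minimal zero-shot classification sketch using the `clip` package from this repository; the checkpoint name, image path, and candidate captions below are illustrative placeholders, not part of the listing.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# Load the ViT-B/32 checkpoint together with its matching image preprocessing.
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # placeholder path
text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

with torch.no_grad():
    # Similarity logits between the image and each candidate caption.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print(probs)  # the caption with the highest probability is the best match
```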
Alternatives and similar repositories for CLIP
Users interested in CLIP are comparing it to the repositories listed below
- An open source implementation of CLIP (see the OpenCLIP sketch after this list). ☆12,825 · Updated last month
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,553 · Updated last year
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆10,975 · Updated 11 months ago
- ☆11,906 · Updated 7 months ago
- PyTorch code and models for the DINOv2 self-supervised learning method. ☆11,780 · Updated 2 months ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆21,787 · Updated 3 months ago
- End-to-End Object Detection with Transformers ☆14,798 · Updated last year
- Taming Transformers for High-Resolution Image Synthesis ☆6,331 · Updated last year
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆22,853 · Updated last year
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,231 · Updated last week
- 🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch. ☆31,383 · Updated this week
- Fast and memory-efficient exact attention ☆20,151 · Updated this week
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal model… ☆151,652 · Updated this week
- Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Py… ☆24,244 · Updated this week
- High-Resolution Image Synthesis with Latent Diffusion Models ☆13,456 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆47,749 · Updated 10 months ago
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,027 · Updated last week
- Easily turn large sets of image urls to an image dataset. Can download, resize and package 100M urls in 20h on one machine. ☆4,191 · Updated last week
- This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows". ☆15,331 · Updated last year
- The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights --… ☆35,549 · Updated last week
- PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO ☆7,253 · Updated last year
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆12,843 · Updated 10 months ago
- PyTorch implementation of MAE: https://arxiv.org/abs/2111.06377
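As an illustration of how the first alternative above (the open source CLIP implementation, OpenCLIP) mirrors the original API, here is a minimal zero-shot sketch; it assumes the `open_clip_torch` package with the LAION-2B ViT-B/32 weights, and the image path and captions are placeholders.

```python
import torch
from PIL import Image
import open_clip

# Load an OpenCLIP model with its matching preprocessing and tokenizer
# (model name and pretrained tag are assumptions; see the OpenCLIP repo for the full list).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder path
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize embeddings, then compare with a scaled dot product as in CLIP.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probability of each caption matching the image
```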