data2ml / all-clipLinks
Load any CLIP model with a standardized interface
☆22 · Updated last week
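As a quick illustration of that standardized interface, here is a minimal usage sketch. It assumes all-clip exposes a `load_clip` entry point as described in the project README; the model identifier and device below are illustrative choices, not defaults.

```python
# Minimal sketch, assuming all-clip exposes `load_clip` as its README describes.
# The model identifier and device are illustrative.
import torch
from all_clip import load_clip

# One call returns the model plus its matching preprocessor and tokenizer,
# regardless of which CLIP implementation backs the given identifier.
model, preprocess, tokenizer = load_clip(
    "open_clip:ViT-B-32/laion2b_s34b_b79k",
    device="cpu",
)

text = tokenizer(["a photo of a cat"])
with torch.no_grad():
    text_features = model.encode_text(text)
print(text_features.shape)
```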
Alternatives and similar repositories for all-clip
Users interested in all-clip are comparing it to the libraries listed below.
- A dashboard for exploring timm learning rate schedulers ☆19 · Updated 9 months ago
- An open-source implementation of CLIP. ☆32 · Updated 2 years ago
- Pixel Parsing. A reproduction of OCR-free end-to-end document understanding models with open data ☆21 · Updated last year
- Implementation of a holodeck, written in PyTorch ☆18 · Updated last year
- JAX implementation of ViT-VQGAN ☆83 · Updated 2 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- Script and models for clustering LAION-400m CLIP embeddings. ☆26 · Updated 3 years ago
- LoRA fine-tuned Stable Diffusion deployment ☆31 · Updated 2 years ago
- Contrastive Language-Image Pretraining ☆38 · Updated last year
- Aggregating embeddings over time ☆32 · Updated 2 years ago
- ☆27 · Updated 4 years ago
- Un-*** a 50-billion multimodality dataset ☆23 · Updated 3 years ago
- CLOOB training (JAX) and inference (JAX and PyTorch) ☆72 · Updated 3 years ago
- ☆59 · Updated last year
- Simple script to re-rank images using OpenAI's CLIP https://github.com/openai/CLIP (a minimal re-ranking sketch follows this list). ☆16 · Updated 4 years ago
- Latent Diffusion Language Models ☆69 · Updated last year
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆24 · Updated last week
- Codebase for the SIMAT dataset and evaluation ☆38 · Updated 3 years ago
- Short article showing how to load PyTorch models with linear memory consumption ☆34 · Updated 3 years ago
- Contains my experiments with the `big_vision` repo to train ViTs on ImageNet-1k. ☆22 · Updated 2 years ago
- Utilities for PyTorch distributed ☆25 · Updated 6 months ago
- Efficiently stream-read embeddings from any filesystem ☆102 · Updated last month
- Train vision models using JAX and 🤗 transformers ☆99 · Updated 2 weeks ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- ☆23 · Updated 9 months ago
- ☆39 · Updated last year
- Recipe for training fully featured self-supervised image JEPA models ☆10 · Updated 3 months ago
- Utilities for Training Very Large Models ☆58 · Updated 11 months ago
- DALLE-tools provides useful dataset utilities to improve your workflow with WebDatasets. ☆15 · Updated 3 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆51 · Updated 3 years ago