rom1504 / CLIP
Contrastive Language-Image Pre-training
☆38 · Updated 8 months ago
Alternatives and similar repositories for CLIP:
Users interested in CLIP are comparing it to the repositories listed below.
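For context on what these repositories implement: CLIP trains an image encoder and a text encoder jointly with a symmetric contrastive loss, pulling matching image/text pairs together and pushing mismatched pairs apart. A minimal NumPy sketch of that loss (the function name, batch shapes, and temperature value are illustrative assumptions, not taken from any of the listed repositories):

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # L2-normalize embeddings so the dot product is cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    # Pairwise similarity logits, scaled by temperature
    logits = image_emb @ text_emb.T / temperature
    n = logits.shape[0]

    def xent(l):
        # Cross-entropy with the matching pair (diagonal) as the target
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Symmetric loss: image-to-text plus text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
txt = img + 0.01 * rng.normal(size=(4, 8))  # nearly matched pairs
loss = clip_contrastive_loss(img, txt)
```

With nearly identical pairs the loss is close to zero; for random, unrelated pairs it approaches log(batch_size).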
- ☆26 · Updated 2 years ago
- Utilities for PyTorch distributed ☆23 · Updated 3 weeks ago
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆100 · Updated last year
- An open source implementation of CLIP. ☆32 · Updated 2 years ago
- A JAX nn library ☆21 · Updated 3 weeks ago
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆57 · Updated last year
- Load any CLIP model with a standardized interface ☆21 · Updated 10 months ago
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ ☆54 · Updated 2 years ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in PyTorch ☆23 · Updated 2 months ago
- A scalable implementation of diffusion and flow-matching with XGBoost models, applied to calorimeter data ☆17 · Updated 4 months ago
- A JAX implementation of the continuous-time formulation of Consistency Models ☆84 · Updated last year
- A dashboard for exploring timm learning rate schedulers ☆19 · Updated 4 months ago
- CLOOB training (JAX) and inference (JAX and PyTorch) ☆70 · Updated 2 years ago
- DiCE: The Infinitely Differentiable Monte-Carlo Estimator ☆31 · Updated last year
- ☆28 · Updated 3 years ago
- ☆29 · Updated 2 years ago
- JAX implementation of ViT-VQGAN ☆82 · Updated 2 years ago
- Implementation of LogAvgExp for PyTorch ☆34 · Updated 2 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆48 · Updated 3 years ago
- Latent Diffusion Language Models ☆68 · Updated last year
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in PyTorch ☆37 · Updated 2 years ago
- Implementation of Tranception, an attention network, paired with retrieval, that is SOTA for protein fitness prediction ☆31 · Updated 2 years ago
- Implementation of an attention layer where each head can attend to more than just one token, using coordinate descent to pick the top-k ☆46 · Updated last year
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆50 · Updated 2 years ago
- Implementation of MetaFormer, but in an autoregressive manner ☆23 · Updated 2 years ago
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆45 · Updated last month
- Describe the format of image/text datasets ☆11 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- ☆157 · Updated 2 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 2 years ago