borisdayma / clip-jax
Train vision models using JAX and 🤗 transformers
⭐99 · Updated 2 weeks ago
Alternatives and similar repositories for clip-jax
Users interested in clip-jax are comparing it to the libraries listed below.
- ⭐91 · Updated 2 years ago
- Automatically take good care of your preemptible TPUs ⭐36 · Updated 2 years ago
- A JAX implementation of the continuous-time formulation of Consistency Models ⭐85 · Updated 2 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ⭐50 · Updated last year
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ⭐132 · Updated last year
- Latent Diffusion Language Models ⭐69 · Updated last year
- ⭐34 · Updated last year
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in PyTorch ⭐25 · Updated 7 months ago
- Utilities for PyTorch distributed ⭐25 · Updated 6 months ago
- JAX implementation of ViT-VQGAN ⭐83 · Updated 2 years ago
- Just some miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… ⭐124 · Updated last year
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗 `safetensors` ⭐45 · Updated last year
- Focused on fast experimentation and simplicity ⭐75 · Updated 8 months ago
- JAX implementation of Black Forest Labs' Flux.1 family of models ⭐36 · Updated last week
- Implementing the Denoising Diffusion Probabilistic Model in Flax ⭐150 · Updated 2 years ago
- ⭐87 · Updated last year
- LoRA for arbitrary JAX models and functions ⭐142 · Updated last year
- FID computation in Jax/Flax ⭐28 · Updated last year
- ⭐19 · Updated 3 months ago
- ⭐53 · Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ⭐188 · Updated 3 years ago
- Contrastive Language-Image Pretraining ⭐144 · Updated 3 years ago
- ⭐61 · Updated 3 years ago
- PyTorch interface for TrueGrad Optimizers ⭐42 · Updated 2 years ago
- Simple implementation of muP, based on Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it for Adam ⭐85 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ⭐21 · Updated 3 months ago
- ⭐31 · Updated 2 months ago
- JAX implementation of the Llama 2 model ⭐219 · Updated last year
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ⭐220 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ⭐35 · Updated 2 years ago