conceptofmind / vit-flax
Implementations of numerous Vision Transformer architectures in Google's JAX and Flax.
☆22 · Updated 3 years ago
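For orientation, here is a minimal sketch of what a ViT-style model looks like in Flax's linen API: patchify via strided convolution, then a pre-norm Transformer encoder block. The module names (PatchEmbed, EncoderBlock) and hyperparameters are illustrative assumptions for this sketch, not vit-flax's actual API.

```python
# Minimal, illustrative ViT-style sketch in Flax linen.
# Names and defaults are assumptions, not vit-flax's actual API.
import jax
import jax.numpy as jnp
import flax.linen as nn


class PatchEmbed(nn.Module):
    """Split an image into patches and linearly project each to embed_dim."""
    patch_size: int = 16
    embed_dim: int = 768

    @nn.compact
    def __call__(self, x):  # x: (batch, height, width, channels)
        # A conv with kernel == stride == patch_size is exactly
        # "patchify + linear projection".
        x = nn.Conv(
            features=self.embed_dim,
            kernel_size=(self.patch_size, self.patch_size),
            strides=(self.patch_size, self.patch_size),
        )(x)
        b, h, w, c = x.shape
        return x.reshape(b, h * w, c)  # (batch, num_patches, embed_dim)


class EncoderBlock(nn.Module):
    """Pre-norm Transformer block: self-attention + MLP, each with a residual."""
    num_heads: int = 12
    mlp_ratio: int = 4

    @nn.compact
    def __call__(self, x):
        y = nn.LayerNorm()(x)
        y = nn.MultiHeadDotProductAttention(num_heads=self.num_heads)(y, y)
        x = x + y
        y = nn.LayerNorm()(x)
        y = nn.Dense(x.shape[-1] * self.mlp_ratio)(y)
        y = nn.gelu(y)
        y = nn.Dense(x.shape[-1])(y)
        return x + y


# Initialize and run on a dummy batch: 224x224 RGB -> 196 patch tokens of dim 768.
model = nn.Sequential([PatchEmbed(), EncoderBlock()])
x = jnp.ones((1, 224, 224, 3))
params = model.init(jax.random.PRNGKey(0), x)
out = model.apply(params, x)  # shape (1, 196, 768)
```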
Alternatives and similar repositories for vit-flax
Users interested in vit-flax are comparing it to the libraries listed below:
- FID computation in Jax/Flax. ☆29 · Updated last year
- Implementing the Denoising Diffusion Probabilistic Model in Flax ☆151 · Updated 3 years ago
- Contrastive Language-Image Pretraining ☆143 · Updated 3 years ago
- JAX Implementation of Black Forest Labs' Flux.1 family of models ☆39 · Updated 2 months ago
- Just some miscellaneous utility functions / decorators / modules related to Pytorch and Accelerate to help speed up implementation of new… ☆123 · Updated last year
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- ☆34 · Updated 11 months ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆89 · Updated last year
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · Updated 4 years ago
- Little article showing how to load PyTorch models with linear memory consumption ☆34 · Updated 3 years ago
- Easy Hypernetworks in Pytorch and Jax ☆105 · Updated 2 years ago
- Train vision models using JAX and 🤗 transformers ☆99 · Updated this week
- Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax). ☆115 · Updated 3 years ago
- ☆75 · Updated 2 years ago
- A port of the Mistral-7B model in JAX ☆32 · Updated last year
- DiT (training + flow matching) in Jax ☆10 · Updated 10 months ago
- PyTorch interface for TrueGrad Optimizers ☆43 · Updated 2 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated last year
- ☆116 · Updated this week
- ☆62 · Updated 3 years ago
- Unofficial JAX implementations of deep learning research papers ☆159 · Updated 3 years ago
- Implementation of Flash Attention in Jax ☆220 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax ☆90 · Updated last year
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in Pytorch ☆25 · Updated 9 months ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆45 · Updated 2 years ago
- Maximal Update Parametrization (μP) with Flax & Optax. ☆16 · Updated last year
- Lightning-like training API for JAX with Flax ☆44 · Updated 11 months ago
- A simple library for scaling up JAX programs ☆144 · Updated this week
- A practical implementation of GradNorm, Gradient Normalization for Adaptive Loss Balancing, in Pytorch ☆112 · Updated 2 months ago
- Local Attention - Flax module for Jax ☆22 · Updated 4 years ago