borisdayma / vit-vqgan
JAX implementation of ViT-VQGAN
☆56 · Updated 2 years ago
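For context, the component that distinguishes ViT-VQGAN (and VQGANs generally) is the vector-quantized bottleneck between the ViT encoder and decoder. The sketch below shows that quantization step in JAX with a straight-through gradient; the `quantize` helper and its shapes are illustrative assumptions, not this repository's API.

```python
import jax
import jax.numpy as jnp

def quantize(z_e, codebook):
    """Nearest-neighbour vector quantization (hypothetical helper, not repo API).

    z_e:      (..., d) continuous encoder outputs, e.g. ViT patch embeddings
    codebook: (K, d) learnable code vectors
    """
    # Squared L2 distance from each embedding to every codebook entry.
    d2 = (
        jnp.sum(z_e ** 2, axis=-1, keepdims=True)
        - 2.0 * z_e @ codebook.T
        + jnp.sum(codebook ** 2, axis=-1)
    )
    idx = jnp.argmin(d2, axis=-1)  # nearest code index per embedding
    z_q = codebook[idx]            # quantized embeddings, same shape as z_e
    # Straight-through estimator: forward pass uses z_q, gradients flow to z_e.
    z_q = z_e + jax.lax.stop_gradient(z_q - z_e)
    return z_q, idx

# Toy usage: 16 patch embeddings of width 8, codebook of 32 entries.
z_e = jax.random.normal(jax.random.PRNGKey(0), (16, 8))
codebook = jax.random.normal(jax.random.PRNGKey(1), (32, 8))
z_q, idx = quantize(z_e, codebook)
print(z_q.shape, idx.shape)  # (16, 8) (16,)
```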
Alternatives and similar repositories for vit-vqgan:
Users interested in vit-vqgan are comparing it to the libraries listed below.
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆184 · Updated 2 years ago
- ☆58 · Updated 2 years ago
- Contrastive Language-Image Pretraining ☆142 · Updated 2 years ago
- A GPT, made only of MLPs, in Jax ☆57 · Updated 3 years ago
- An open source implementation of CLIP. ☆32 · Updated 2 years ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆80 · Updated 3 years ago
- Learned Hyperparameter Optimizers ☆58 · Updated 3 years ago
- Simple and efficient RevNet library for PyTorch with XLA and DeepSpeed support and parameter offload ☆125 · Updated 2 years ago
- Train very large language models in Jax. ☆198 · Updated last year
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 2 years ago
- Another attempt at a long-context / efficient transformer by me ☆37 · Updated 2 years ago
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ ☆53 · Updated 2 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆47 · Updated last year
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI ☆84 · Updated 3 years ago
- EfficientNet, MobileNetV3, MobileNetV2, MixNet, etc in JAX w/ Flax Linen and Objax ☆127 · Updated last year
- ☆30 · Updated this week
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆47 · Updated 3 years ago
- Re-implementation of 'Grokking: Generalization beyond overfitting on small algorithmic datasets' ☆38 · Updated 3 years ago
- JAX implementation of ViT-VQGAN ☆80 · Updated 2 years ago
- Texture mapping with variational auto-encoders ☆40 · Updated 3 years ago
- Train vision models using JAX and 🤗 transformers ☆97 · Updated this week
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆50 · Updated 2 years ago
- a lightweight transformer library for PyTorch ☆72 · Updated 3 years ago
- Python Research Framework ☆106 · Updated 2 years ago
- ☆130 · Updated last year
- ☆181 · Updated last week
- Amos optimizer with JEstimator lib. ☆81 · Updated 8 months ago
- ☆27 · Updated 3 years ago
- ☆38 · Updated 2 years ago
- Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax). ☆106 · Updated 2 years ago