ml-gde / jflux
JAX Implementation of Black Forest Labs' Flux.1 family of models
⭐40 · Updated 2 months ago
Alternatives and similar repositories for jflux
Users interested in jflux are comparing it to the libraries listed below.
- Train vision models using JAX and 🤗 transformers ⭐100 · Updated last month
- ⭐92 · Updated last year
- Focused on fast experimentation and simplicity ⭐80 · Updated last year
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ⭐132 · Updated last year
- Automatically take good care of your preemptible TPUs ⭐37 · Updated 2 years ago
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗 `safetensors` ⭐47 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ⭐100 · Updated last year
- LoRA for arbitrary JAX models and functions ⭐144 · Updated last year
- DeMo: Decoupled Momentum Optimization ⭐198 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ⭐103 · Updated last year
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ⭐92 · Updated 2 years ago
- Supporting PyTorch FSDP for optimizers ⭐84 · Updated last year
- Just some miscellaneous utility functions / decorators / modules related to Pytorch and Accelerate to help speed up implementation of new… ⭐126 · Updated last year
- ⭐53 · Updated 2 years ago
- ⭐34 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ⭐170 · Updated last year
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single machine microbatches, in Pytorch ⭐25 · Updated last year
- A port of the Mistral-7B model to JAX ⭐33 · Updated last year
- FID computation in Jax/Flax. ⭐29 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ⭐186 · Updated 2 weeks ago
- ⭐304 · Updated this week
- Collection of autoregressive model implementations ⭐85 · Updated 3 weeks ago
- JAX implementation of the Llama 2 model ⭐216 · Updated 2 years ago
- A simple library for scaling up JAX programs ⭐144 · Updated 3 months ago
- Experiment of using Tangent to autodiff triton ⭐82 · Updated 2 years ago
- Maximal Update Parametrization (μP) with Flax & Optax. ⭐16 · Updated 2 years ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ⭐86 · Updated 2 years ago
- Implementation of Diffusion Transformers and Rectified Flow in Jax ⭐27 · Updated last year
- Implementing the Denoising Diffusion Probabilistic Model in Flax ⭐156 · Updated 3 years ago
- An implementation of the Llama architecture, to instruct and delight ⭐21 · Updated 8 months ago