ml-gde / jflux
JAX Implementation of Black Forest Labs' Flux.1 family of models
☆39 · Updated 2 weeks ago
Alternatives and similar repositories for jflux
Users interested in jflux are comparing it to the libraries listed below.
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Train vision models using JAX and 🤗 transformers ☆100 · Updated last month
- LoRA for arbitrary JAX models and functions (see the LoRA sketch after this list) ☆143 · Updated last year
- ☆91 · Updated last year
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 11 months ago
- ☆53 · Updated last year
- Automatically take good care of your preemptible TPUs ☆37 · Updated 2 years ago
- A port of the Mistral-7B model to JAX ☆32 · Updated last year
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗 `safetensors` (see the serialization sketch after this list) ☆47 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆92 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" (see the Grokfast sketch after this list) ☆103 · Updated 11 months ago
- ☆34 · Updated last year
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆168 · Updated 10 months ago
- FID computation in JAX/Flax (see the FID sketch after this list) ☆29 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- A simple library for scaling up JAX programs ☆144 · Updated last month
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- WIP ☆93 · Updated last year
- Just some miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… ☆125 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in PyTorch ☆25 · Updated 10 months ago
- ☆91 · Updated 3 years ago
- ☆62 · Updated 3 years ago
- DeMo: Decoupled Momentum Optimization ☆197 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 6 months ago
- Latent Diffusion Language Models ☆70 · Updated 2 years ago
- Simple implementation of muP, based on the Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it for Adam ☆85 · Updated last year
- PyTorch interface for TrueGrad Optimizers ☆43 · Updated 2 years ago
- ☆68 · Updated last year
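
For the LoRA entry above: a minimal sketch of the low-rank-adapter idea applied to a plain JAX function. This is illustrative only, not the listed library's API; `lora_dense`, the rank `r = 4`, and the `alpha` scaling value are all assumptions.

```python
# Hypothetical LoRA-style layer: the pretrained weight stays frozen and only
# the low-rank factors A and B are trained. All names here are made up.
import jax
import jax.numpy as jnp

def lora_dense(x, frozen_w, lora_a, lora_b, alpha=8.0):
    """y = x @ (W + (alpha / r) * A @ B); W is frozen, A and B are trainable."""
    r = lora_a.shape[1]
    return x @ (frozen_w + (alpha / r) * (lora_a @ lora_b))

key_w, key_a = jax.random.split(jax.random.PRNGKey(0))
d_in, d_out, r = 16, 32, 4
frozen_w = jax.random.normal(key_w, (d_in, d_out))   # pretrained, frozen
lora_a = 0.01 * jax.random.normal(key_a, (d_in, r))  # adapter factor A
lora_b = jnp.zeros((r, d_out))                       # B = 0, so the adapter starts as a no-op

x = jnp.ones((2, d_in))
# Differentiate w.r.t. the adapters only; frozen_w is closed over.
loss = lambda a, b: lora_dense(x, frozen_w, a, b).sum()
grad_a, grad_b = jax.grad(loss, argnums=(0, 1))(lora_a, lora_b)
```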
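For the `safetensors` entry: a sketch of round-tripping a flat dict of JAX arrays through the `safetensors.flax` helpers. It assumes the `safetensors` package is installed; the parameter names and file name are illustrative.

```python
import jax.numpy as jnp
from safetensors.flax import save_file, load_file

# A flat {name: array} dict; nested pytrees would need flattening first.
params = {"dense/kernel": jnp.ones((4, 8)), "dense/bias": jnp.zeros((8,))}
save_file(params, "params.safetensors")     # write all tensors to one file
restored = load_file("params.safetensors")  # read them back as JAX arrays
assert jnp.allclose(restored["dense/kernel"], params["dense/kernel"])
```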
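For the Grokfast entry: a sketch of the paper's gradient-filtering idea (keep an EMA of past gradients and add a scaled copy back to the raw gradient before the optimizer step). `alpha` and `lamb` follow the paper's notation; the function name and the specific values are assumptions.

```python
import jax
import jax.numpy as jnp

def grokfast_ema(grads, ema, alpha=0.98, lamb=2.0):
    """Amplify the slow (low-frequency) gradient component via an EMA filter."""
    new_ema = jax.tree_util.tree_map(lambda e, g: alpha * e + (1 - alpha) * g, ema, grads)
    boosted = jax.tree_util.tree_map(lambda g, e: g + lamb * e, grads, new_ema)
    return boosted, new_ema

grads = {"w": jnp.ones((2, 2))}                      # stand-in gradient pytree
ema = jax.tree_util.tree_map(jnp.zeros_like, grads)  # filter state, zero-initialized
boosted, ema = grokfast_ema(grads, ema)              # feed `boosted` to the optimizer
```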
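For the FID entry: a sketch of the Fréchet distance between two Gaussians fitted to activation statistics, which is the core of FID. The InceptionV3 feature extraction is omitted; `feats1`/`feats2` stand in for precomputed activation matrices, and the eigenvalue route to tr(sqrtm(C1 @ C2)) is one common implementation choice, not necessarily the listed repo's.

```python
import jax
import jax.numpy as jnp

def frechet_distance(feats1, feats2, eps=1e-6):
    """||mu1 - mu2||^2 + tr(C1 + C2 - 2 * sqrtm(C1 @ C2)) for (N, D) feature matrices."""
    mu1, mu2 = feats1.mean(0), feats2.mean(0)
    c1 = jnp.cov(feats1, rowvar=False) + eps * jnp.eye(feats1.shape[1])
    c2 = jnp.cov(feats2, rowvar=False) + eps * jnp.eye(feats2.shape[1])
    # C1 @ C2 is similar to a PSD matrix, so its eigenvalues are real and >= 0;
    # tr(sqrtm(C1 @ C2)) is then the sum of their square roots.
    eigvals = jnp.linalg.eigvals(c1 @ c2)
    tr_sqrt = jnp.sum(jnp.sqrt(jnp.clip(eigvals.real, 0.0)))
    return jnp.sum((mu1 - mu2) ** 2) + jnp.trace(c1) + jnp.trace(c2) - 2.0 * tr_sqrt

f1 = jax.random.normal(jax.random.PRNGKey(0), (256, 64))
f2 = 0.5 + jax.random.normal(jax.random.PRNGKey(1), (256, 64))
print(frechet_distance(f1, f2))
```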