ml-gde / jflux
JAX implementation of Black Forest Labs' Flux.1 family of models
☆35, updated last week
Alternatives and similar repositories for jflux
Users interested in jflux are comparing it to the libraries listed below:
- Minimal (400 LOC) implementation; maximum (multi-node, FSDP) GPT training (☆131, updated last year)
- LoRA for arbitrary JAX models and functions (☆141, updated last year; see the LoRA sketch after this list)
- Train vision models using JAX and 🤗 transformers (☆98, updated this week)
- ☆87, updated last year
- Supporting PyTorch FSDP for optimizers (☆84, updated 8 months ago)
- ☆34, updated 11 months ago
- DeMo: Decoupled Momentum Optimization (☆190, updated 8 months ago)
- A port of the Mistral-7B model to JAX (☆32, updated last year)
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗 `safetensors` (☆45, updated last year; see the serialization sketch after this list)
- Focused on fast experimentation and simplicity (☆76, updated 8 months ago)
- Automatically take good care of your preemptible TPUs (☆36, updated 2 years ago)
- WIP (☆94, updated last year)
- ☆53, updated last year
- A simple library for scaling up JAX programs (☆143, updated 9 months ago)
- ☆115, updated this week
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX (☆87, updated last year; see the selective-scan sketch after this list)
- minGPT in JAX (☆48, updated 3 years ago)
- 🧱 Modula software package (☆225, updated last week)
- Implementing the Denoising Diffusion Probabilistic Model in Flax (☆149, updated 2 years ago; see the DDPM loss sketch after this list)
- Unofficial JAX implementations of deep learning research papers (☆156, updated 3 years ago)
- A simple, performant and scalable JAX-based world modeling codebase (☆70, updated this week)
- ☆61, updated 3 years ago
- seqax = sequence modeling + JAX (☆166, updated last month)
- JAX implementation of the Llama 2 model (☆219, updated last year)
- Implementation of Gradient Agreement Filtering, from Chaubard et al. at Stanford, adapted to single-machine microbatches in PyTorch (☆25, updated 7 months ago; see the agreement-filtering sketch after this list)
- ☆65, updated 9 months ago
- An experiment in using Tangent to autodiff Triton (☆80, updated last year)
- ☆115, updated 2 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs (☆152, updated last month)
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" (☆101, updated 8 months ago; see the Grokfast sketch after this list)
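
Several of the techniques above are compact enough to sketch. First, the LoRA entry: the core idea is to freeze the base weight W and train only a low-rank update (alpha/r)·BA on top of it. Below is a minimal sketch in plain JAX, assuming a single dense layer; the function and parameter names are illustrative, not the listed library's API.

```python
import jax
import jax.numpy as jnp

def init_lora(key, d_in, d_out, rank=8):
    # A is small random, B is zero, so the model starts at the frozen baseline
    return {
        "A": jax.random.normal(key, (rank, d_in)) * 0.01,  # trainable
        "B": jnp.zeros((d_out, rank)),                     # trainable
    }

def lora_linear(W, lora, x, alpha=16.0):
    rank = lora["A"].shape[0]
    delta = (alpha / rank) * (lora["B"] @ lora["A"])  # low-rank update to W
    return x @ (W + delta).T                          # W itself stays frozen

key = jax.random.PRNGKey(0)
W = jax.random.normal(key, (32, 64))       # frozen base weight (d_out, d_in)
lora = init_lora(key, d_in=64, d_out=32)
x = jnp.ones((4, 64))
y = lora_linear(W, lora, x)                # equals x @ W.T at init, since B = 0
```

In training, only `lora` would be passed to `jax.grad`, leaving `W` untouched.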
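
The serialization entry reduces to one constraint: safetensors stores a flat {name: array} mapping, so nested param pytrees must be flattened first. A rough round-trip sketch using the official `safetensors.flax` backend (the listed repo's own API likely differs):

```python
import jax.numpy as jnp
from safetensors.flax import save_file, load_file

def flatten(tree, prefix=""):
    # collapse a nested dict of arrays into {"outer.inner": array}
    flat = {}
    for k, v in tree.items():
        name = k if not prefix else f"{prefix}.{k}"
        if isinstance(v, dict):
            flat.update(flatten(v, name))
        else:
            flat[name] = v
    return flat

params = {"dense": {"kernel": jnp.ones((4, 4)), "bias": jnp.zeros((4,))}}
save_file(flatten(params), "params.safetensors")
restored = load_file("params.safetensors")  # {"dense.kernel": ..., "dense.bias": ...}
```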
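
The Mamba entry centers on a selective state-space recurrence: the step size and the B/C projections depend on the input, which is what makes the SSM "selective". Here is a toy single-channel version with a diagonal state matrix, written with `jax.lax.scan`; the real implementation uses a hardware-aware parallel scan, and all shapes and names here are illustrative.

```python
import jax
import jax.numpy as jnp

def selective_scan(x, delta, A, B, C):
    # x, delta: (T,); A: (N,) diagonal; B, C: (T, N) input-dependent
    def step(h, inputs):
        x_t, d_t, B_t, C_t = inputs
        h = jnp.exp(d_t * A) * h + d_t * B_t * x_t  # discretized state update
        return h, jnp.dot(C_t, h)                   # scalar readout per step
    h0 = jnp.zeros_like(A)
    _, y = jax.lax.scan(step, h0, (x, delta, B, C))
    return y

T, N = 16, 4
kx, kd, kb, kc = jax.random.split(jax.random.PRNGKey(0), 4)
x = jax.random.normal(kx, (T,))
delta = jax.nn.softplus(jax.random.normal(kd, (T,)))  # positive step sizes
A = -jnp.arange(1.0, N + 1.0)                         # negative poles -> stable
B = jax.random.normal(kb, (T, N))
C = jax.random.normal(kc, (T, N))
y = selective_scan(x, delta, A, B, C)                 # (T,)
```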
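
The DDPM entry's training loss has a compact closed form: sample a timestep, corrupt the clean sample to x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε with ε ~ N(0, I), and regress ε. A minimal sketch with a toy stand-in for the denoiser (a real implementation would substitute a Flax U-Net):

```python
import jax
import jax.numpy as jnp

T = 1000
betas = jnp.linspace(1e-4, 0.02, T)      # standard linear schedule
alpha_bar = jnp.cumprod(1.0 - betas)

def ddpm_loss(denoise_fn, params, x0, key):
    k_t, k_eps = jax.random.split(key)
    t = jax.random.randint(k_t, (x0.shape[0],), 0, T)
    eps = jax.random.normal(k_eps, x0.shape)
    ab = alpha_bar[t].reshape(-1, *([1] * (x0.ndim - 1)))
    x_t = jnp.sqrt(ab) * x0 + jnp.sqrt(1.0 - ab) * eps  # q(x_t | x_0)
    return jnp.mean((eps - denoise_fn(params, x_t, t)) ** 2)

def toy_denoiser(params, x_t, t):
    return x_t @ params["W"]             # stand-in; ignores t for brevity

params = {"W": jnp.eye(8)}
x0 = jax.random.normal(jax.random.PRNGKey(1), (4, 8))
loss = ddpm_loss(toy_denoiser, params, x0, jax.random.PRNGKey(2))
```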
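
The agreement-filtering entry, as I read Chaubard et al., compares gradients from independent microbatches and applies the update only when they agree, using cosine distance as the agreement test. A hedged JAX sketch (the listed repo is in PyTorch, and the threshold and skip behavior below are my reading of the paper, not its code):

```python
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def gaf_combine(g1, g2, max_cos_distance=1.0):
    f1, _ = ravel_pytree(g1)
    f2, _ = ravel_pytree(g2)
    cos = jnp.dot(f1, f2) / (jnp.linalg.norm(f1) * jnp.linalg.norm(f2) + 1e-8)
    agree = (1.0 - cos) < max_cos_distance  # small cosine distance = agreement
    # average when the microbatch gradients agree; otherwise skip the update
    return jax.tree_util.tree_map(
        lambda a, b: jnp.where(agree, 0.5 * (a + b), jnp.zeros_like(a)), g1, g2
    )

g1 = {"w": jnp.ones(3)}
g2 = {"w": jnp.full(3, 0.5)}
g = gaf_combine(g1, g2)  # parallel gradients: distance 0, so the mean is kept
```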
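
Finally, the Grokfast entry: amplify the slow (low-frequency) component of the gradient by adding back a scaled EMA of past gradients, roughly g' = g + λ·ema(g), before handing g' to the optimizer. A minimal sketch of the EMA variant; the hyperparameters below are illustrative, not necessarily the repo's defaults.

```python
import jax
import jax.numpy as jnp

def grokfast_ema(grads, ema, alpha=0.98, lam=2.0):
    # track the slow gradient component, then amplify it in the update
    ema = jax.tree_util.tree_map(lambda e, g: alpha * e + (1 - alpha) * g, ema, grads)
    filtered = jax.tree_util.tree_map(lambda g, e: g + lam * e, grads, ema)
    return filtered, ema

grads = {"w": jnp.ones(3), "b": jnp.ones(())}
ema = jax.tree_util.tree_map(jnp.zeros_like, grads)
filtered, ema = grokfast_ema(grads, ema)  # pass `filtered` to the optimizer
```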