packquickly / schedule_free_optx
Schedule-free optimiser implemented in JAX using Optimistix
☆14 · Updated 5 months ago
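For orientation, the core schedule-free idea (Defazio et al., 2024) replaces the learning-rate schedule with iterate averaging: gradients are taken at an interpolation between a base iterate and a running average of past iterates. Below is a minimal illustrative sketch of that update rule in plain JAX. The toy loss and the values of `lr` and `beta` are assumptions, and this is not schedule_free_optx's Optimistix-based API.

```python
# Minimal sketch of the schedule-free update rule in plain JAX.
# Illustrative only -- hyperparameters and the toy loss are assumptions.
import jax
import jax.numpy as jnp

def loss(params):
    # Toy quadratic objective with minimiser at 3.0; any differentiable loss works.
    return jnp.sum((params - 3.0) ** 2)

grad_fn = jax.grad(loss)

lr = 0.1    # base step size (hypothetical value)
beta = 0.9  # interpolation weight between average and base iterate

z = jnp.zeros(4)  # base (SGD-like) iterate
x = jnp.zeros(4)  # running average, used at evaluation time

for t in range(1, 101):
    y = (1.0 - beta) * z + beta * x  # gradient is evaluated at the interpolated point
    g = grad_fn(y)
    z = z - lr * g                   # plain SGD step on the base iterate
    c = 1.0 / t                      # uniform averaging weight, no schedule required
    x = (1.0 - c) * x + c * z        # fold the new base iterate into the average

print(x)  # x converges toward the minimiser at 3.0
```

In the full method the plain SGD step on `z` can be swapped for an Adam-style update, and the averaged iterate `x` is the one you evaluate and deploy.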
Related projects
Alternatives and complementary repositories for schedule_free_optx
- Einsum-like high-level array sharding API for JAX ☆32 · Updated 4 months ago
- If it quacks like a tensor... ☆52 · Updated last week
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆29 · Updated 2 weeks ago
- A simple library for scaling up JAX programs ☆127 · Updated 2 weeks ago
- LoRA for arbitrary JAX models and functions ☆133 · Updated 8 months ago
- PyTorch-like dataloaders in JAX. ☆59 · Updated last month
- Scalable neural net training via automatic normalization in the modular norm. ☆121 · Updated 3 months ago
- JAX Arrays for human consumption ☆88 · Updated last year
- Run PyTorch in JAX. 🤝 ☆200 · Updated last year
- OpTree: Optimized PyTree Utilities ☆152 · Updated this week
- Generative cellular automaton-like learning environments for RL. ☆19 · Updated last month
- Efficient optimizers ☆80 · Updated this week
- Multiple dispatch over abstract array types in JAX. ☆105 · Updated last week
- A MAD laboratory to improve AI architecture designs 🧪 ☆95 · Updated 6 months ago
- Named Tensors for Legible Deep Learning in JAX ☆153 · Updated this week
- Turn JIT-compiled JAX functions back into Python source code ☆20 · Updated 4 months ago
- Flow-matching algorithms in JAX ☆77 · Updated 3 months ago
- Implementation of DreamerV3 in PyTorch ☆33 · Updated this week
- Diffusion models in PyTorch ☆87 · Updated last month
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆113 · Updated 7 months ago
- JAX implementation of the Mistral 7b v0.2 model ☆33 · Updated 4 months ago