google-deepmind / nanodo (☆176)
Related projects:
- seqax = sequence modeling + JAX (☆129)
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax (☆492)
- A simple library for scaling up JAX programs (☆116)
- Scalable neural net training via automatic normalization in the modular norm (☆108)
- LoRA for arbitrary JAX models and functions (☆127)
- A Jax-based library for designing and training transformer models from scratch (☆271)
- Cost-aware hyperparameter tuning algorithm (☆95)
- JAX Synergistic Memory Inspector (☆161)
- Named Tensors for Legible Deep Learning in JAX (☆146)
- A MAD laboratory to improve AI architecture designs 🧪 (☆84)
- Accelerated First-Order Parallel Associative Scan (☆151)
- Run PyTorch in JAX 🤝 (☆187)
- Understand and test language model architectures on synthetic tasks (☆156)
- JAX-Toolbox (☆231)
- Implementation of Diffusion Transformer (DiT) in JAX (☆245)
- Orbax provides common checkpointing and persistence utilities for JAX users (☆283)
- Jax/Flax rewrite of Karpathy's nanoGPT (☆46)
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training (☆110)
- Universal Tensor Operations in Einstein-Inspired Notation for Python (☆314)
- JAX implementation of the Llama 2 model (☆205)
- Inference code for LLaMA models in JAX (☆108)
- JMP is a Mixed Precision library for JAX (☆183)
- Puzzles for exploring transformers (☆293)
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… (☆321)
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters (☆94)