young-geng / tpu_pod_commander
TPU pod commander is a package for managing and launching jobs on Google Cloud TPU pods.
☆21 · Updated last month
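For context, launching a job on a Google Cloud TPU pod means running the same command on every TPU VM worker in the slice. The sketch below illustrates that pattern using the standard `gcloud compute tpus tpu-vm ssh --worker=all` command; it only shows the kind of workflow tpu_pod_commander automates and does not use the package's own API. The node name, zone, and training command are placeholders.

```python
# Minimal sketch: broadcast one command to every worker of a TPU pod slice.
# This mirrors what a pod launcher has to do; tpu_pod_commander's actual API may differ.
import subprocess

TPU_NAME = "my-tpu-pod"        # placeholder TPU node name
ZONE = "us-central2-b"         # placeholder zone
RUN_CMD = "python3 train.py"   # placeholder training command

# `--worker=all` executes the command on every host in the pod slice.
subprocess.run(
    [
        "gcloud", "compute", "tpus", "tpu-vm", "ssh", TPU_NAME,
        f"--zone={ZONE}",
        "--worker=all",
        f"--command={RUN_CMD}",
    ],
    check=True,
)
```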
Alternatives and similar repositories for tpu_pod_commander
Users interested in tpu_pod_commander are comparing it to the libraries listed below.
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 2 months ago
- If it quacks like a tensor... ☆59 · Updated last year
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- A simple library for scaling up JAX programs ☆144 · Updated last week
- ☆119 · Updated 5 months ago
- ☆34 · Updated 11 months ago
- A simple, performant and scalable JAX-based world modeling codebase ☆100 · Updated last week
- Implementation of PSGD optimizer in JAX ☆35 · Updated 10 months ago
- ☆19 · Updated 2 years ago
- General Modules for JAX ☆70 · Updated 2 months ago
- Machine Learning eXperiment Utilities ☆46 · Updated 3 months ago
- Accelerated replay buffers in JAX ☆44 · Updated 3 years ago
- Building blocks for productive research ☆62 · Updated 3 months ago
- Train very large language models in Jax. ☆209 · Updated 2 years ago
- JAX Synergistic Memory Inspector ☆179 · Updated last year
- JAX implementation of VQVAE/VQGAN autoencoders (+FSQ) ☆39 · Updated last year
- Jax/Flax rewrite of Karpathy's nanoGPT ☆62 · Updated 2 years ago
- Maximal Update Parametrization (μP) with Flax & Optax. ☆16 · Updated last year
- Flexible meta-learning in JAX ☆15 · Updated 2 years ago
- ☆53 · Updated last year
- Scaling scaling laws with board games. ☆53 · Updated 2 years ago
- GPT implementation in Flax ☆18 · Updated 3 years ago
- A collection of meta-learning algorithms in Jax ☆23 · Updated 3 years ago
- CleanRL's implementation of DeepMind's Podracer Sebulba Architecture for Distributed DRL ☆117 · Updated last year
- A set of Python scripts that makes your experience on TPU better ☆54 · Updated last month
- Tools and Utils for Experiments (TUX) ☆15 · Updated 9 months ago
- JAX bindings for Flash Attention v2 ☆97 · Updated last week
- Atari-style POMDPs ☆17 · Updated 3 weeks ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆37 · Updated last year
- ☆62 · Updated 3 years ago