young-geng / tpu_pod_commander
TPU pod commander is a package for managing and launching jobs on Google Cloud TPU pods.
☆21 · Updated this week
Alternatives and similar repositories for tpu_pod_commander
Users interested in tpu_pod_commander are comparing it to the libraries listed below.
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 3 weeks ago
- If it quacks like a tensor... ☆59 · Updated 10 months ago
- A simple library for scaling up JAX programs ☆143 · Updated 10 months ago
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- Accelerated replay buffers in JAX ☆43 · Updated 3 years ago
- ☆34 · Updated 10 months ago
- ☆19 · Updated 2 years ago
- Building blocks for productive research ☆61 · Updated last month
- ☆120 · Updated 3 months ago
- A simple, performant and scalable JAX-based world modeling codebase ☆75 · Updated this week
- General Modules for JAX ☆67 · Updated 2 weeks ago
- Flexible meta-learning in JAX ☆14 · Updated last year
- Implementation of the PSGD optimizer in JAX ☆34 · Updated 8 months ago
- A collection of meta-learning algorithms in JAX ☆23 · Updated 3 years ago
- Tools and Utils for Experiments (TUX) ☆15 · Updated 8 months ago
- JAX implementation of VQVAE/VQGAN autoencoders (+FSQ) ☆36 · Updated last year
- Maximal Update Parametrization (μP) with Flax & Optax ☆16 · Updated last year
- ☆52 · Updated last year
- JAX/Flax rewrite of Karpathy's nanoGPT ☆60 · Updated 2 years ago
- Scaling scaling laws with board games ☆53 · Updated 2 years ago
- Code for Powderworld: A Platform for Understanding Generalization via Rich Task Distributions ☆68 · Updated last year
- Train very large language models in JAX ☆209 · Updated last year
- ☆13 · Updated last year
- ☆27 · Updated this week
- CleanRL's implementation of DeepMind's Podracer Sebulba architecture for distributed DRL ☆114 · Updated last year
- A reinforcement learning environment for the IGLU 2022 competition at NeurIPS ☆34 · Updated 2 years ago
- 🪐 The Sebulba architecture to scale reinforcement learning on Cloud TPUs in JAX ☆59 · Updated last year
- A repo built to facilitate the training and analysis of autoregressive transformers on maze-solving tasks ☆31 · Updated last year
- JAX implementation of the Mistral 7B v0.1 model ☆13 · Updated last year
- GPT implementation in Flax ☆18 · Updated 3 years ago