ayaka14732 / tpu-starter
Everything you want to know about Google Cloud TPU
☆524 · Updated 9 months ago
Alternatives and similar repositories for tpu-starter:
Users interested in tpu-starter are comparing it to the libraries listed below:
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆568 · Updated this week
- JAX implementation of the Llama 2 model ☆218 · Updated last year
- Puzzles for exploring transformers ☆343 · Updated last year
- What would you do with 1000 H100s... ☆1,038 · Updated last year
- ☆428 · Updated 6 months ago
- JAX Synergistic Memory Inspector ☆172 · Updated 9 months ago
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆489 · Updated last week
- For optimization algorithm research and development. ☆508 · Updated this week
- Named tensors with first-class dimensions for PyTorch ☆320 · Updated last year
- JAX-Toolbox ☆299 · Updated this week
- jax-triton contains integrations between JAX and OpenAI Triton ☆390 · Updated 2 weeks ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,040 · Updated last year
- TensorDict is a PyTorch-dedicated tensor container. ☆911 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆534 · Updated this week
- Annotated version of the Mamba paper ☆483 · Updated last year
- Pipeline Parallelism for PyTorch ☆764 · Updated 8 months ago
- ☆216 · Updated 9 months ago
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆375 · Updated last week
- Implementation of Flash Attention in Jax ☆206 · Updated last year
- A Jax-based library for designing and training small transformers. ☆286 · Updated 7 months ago
- ☆295 · Updated last week
- A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to fac… ☆229 · Updated 3 months ago
- Inference code for LLaMA models in JAX ☆118 · Updated 11 months ago
- Implementation of a Transformer, but completely in Triton ☆263 · Updated 3 years ago
- ☆424 · Updated 9 months ago
- ☆186 · Updated last week
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆991 · Updated last month
- Helpful tools and examples for working with flex-attention ☆726 · Updated 2 weeks ago
- ☆166 · Updated last year
- Building blocks for foundation models. ☆482 · Updated last year