ayaka14732 / tpu-starter
Everything you want to know about Google Cloud TPU
☆560 · Updated last year
Alternatives and similar repositories for tpu-starter
Users interested in tpu-starter are comparing it to the libraries listed below.
- JAX implementation of the Llama 2 model · ☆216 · Updated 2 years ago
- Puzzles for exploring transformers · ☆386 · Updated 2 years ago
- JAX Synergistic Memory Inspector · ☆184 · Updated last year
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax · ☆693 · Updated 2 weeks ago
- ☆367 · Updated last year
- For optimization algorithm research and development. · ☆558 · Updated last month
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… · ☆547 · Updated 3 weeks ago
- jax-triton contains integrations between JAX and OpenAI Triton · ☆439 · Updated this week
- Implementation of Flash Attention in Jax · ☆225 · Updated last year
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… · ☆406 · Updated last week
- ☆494 · Updated last year
- Implementation of a Transformer, but completely in Triton · ☆279 · Updated 3 years ago
- Annotated version of the Mamba paper · ☆496 · Updated last year
- Inference code for LLaMA models in JAX · ☆120 · Updated last year
- ☆344 · Updated this week
- ☆192 · Updated last week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. · ☆595 · Updated 6 months ago
- What would you do with 1000 H100s... · ☆1,151 · Updated 2 years ago
- ☆291 · Updated last year
- A Jax-based library for building transformers; includes implementations of GPT, Gemma, Llama, Mixtral, Whisper, Swin, ViT and more. · ☆300 · Updated last year
- Accelerate and optimize performance with streamlined training and serving options in JAX. · ☆336 · Updated last week
- Pipeline Parallelism for PyTorch · ☆784 · Updated last year
- Named tensors with first-class dimensions for PyTorch · ☆332 · Updated 2 years ago
- ☆177 · Updated 2 years ago
- CLU lets you write beautiful training loops in JAX. · ☆366 · Updated 3 weeks ago
- ☆562 · Updated last year
- Effortless plug-and-play optimizer to cut model training costs by 50%; a new optimizer that is 2x faster than Adam on LLMs. · ☆381 · Updated last year
- ☆316 · Updated last year
- Task-based datasets, preprocessing, and evaluation for sequence models. · ☆594 · Updated last week
- ☆167 · Updated 2 years ago