ayaka14732 / tpu-starter
Everything you want to know about Google Cloud TPU
☆555 · Updated last year
Alternatives and similar repositories for tpu-starter
Users interested in tpu-starter are comparing it to the libraries listed below.
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆690 · Updated this week
- JAX Synergistic Memory Inspector ☆183 · Updated last year
- JAX implementation of the Llama 2 model ☆215 · Updated last year
- For optimization algorithm research and development. ☆556 · Updated 3 weeks ago
- Annotated version of the Mamba paper ☆494 · Updated last year
- ☆366 · Updated last year
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆406 · Updated last week
- Inference code for LLaMA models in JAX ☆120 · Updated last year
- Implementation of Flash Attention in Jax ☆223 · Updated last year
- ☆287 · Updated last year
- Puzzles for exploring transformers ☆382 · Updated 2 years ago
- jax-triton contains integrations between JAX and OpenAI Triton ☆436 · Updated last month
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆542 · Updated 3 weeks ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆592 · Updated 5 months ago
- ☆463 · Updated last year
- Implementation of a Transformer, but completely in Triton ☆278 · Updated 3 years ago
- ☆191 · Updated 3 weeks ago
- ☆551 · Updated last year
- What would you do with 1000 H100s... ☆1,143 · Updated 2 years ago
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆328 · Updated last week
- ☆342 · Updated last week
- The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training” ☆981 · Updated last year
- ☆314 · Updated last year
- Effortless plug-and-play optimizer that cuts model training costs by 50%. A new optimizer that is 2x faster than Adam on LLMs. ☆383 · Updated last year
- ☆233 · Updated 11 months ago
- A Jax-based library for building transformers; includes implementations of GPT, Gemma, LLaMA, Mixtral, Whisper, Swin, ViT and more. ☆297 · Updated last year
- Language Modeling with the H3 State Space Model ☆521 · Updated 2 years ago
- Named tensors with first-class dimensions for PyTorch ☆332 · Updated 2 years ago
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,361 · Updated last year
- Efficient optimizers ☆280 · Updated 3 weeks ago