AI-Hypercomputer / tpu-recipes
☆73 · Updated last week
Alternatives and similar repositories for tpu-recipes
Users interested in tpu-recipes are comparing it to the libraries listed below.
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆79 · Updated last month
- ☆152 · Updated last month
- JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆404 · Updated last month
- Google TPU optimizations for transformer models ☆134 · Updated 2 weeks ago
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool that helps Cloud developers orchestrate training jobs on accelerat… ☆169 · Updated this week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆71 · Updated this week
- MoE training for Me and You and maybe other people ☆335 · Updated last month
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆198 · Updated 8 months ago
- A set of Python scripts that make your experience on TPU better ☆56 · Updated 4 months ago
- Load compute kernels from the Hub ☆397 · Updated this week
- Recipes for reproducing training and serving benchmarks for large machine learning models using GPUs on Google Cloud. ☆114 · Updated last week
- A JAX quantization library (see the quantization sketch after this list) ☆90 · Updated this week
- TPU inference for vLLM, with unified JAX and PyTorch support. ☆228 · Updated this week
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 8 months ago
- Minimal yet performant LLM examples in pure JAX (see the attention sketch after this list) ☆240 · Updated 3 weeks ago
- 👷 Build compute kernels ☆215 · Updated 2 weeks ago
- Accelerate and optimize performance with streamlined training and serving options with JAX. ☆336 · Updated last week
- ☆92 · Updated last year
- Package of Pathways-on-Cloud utilities ☆23 · Updated this week
- Experiment in using Tangent to autodiff Triton ☆82 · Updated 2 years ago
- ☆192 · Updated last week
- torchax is a PyTorch frontend for JAX. It lets you author JAX programs using familiar PyTorch syntax. It also provides JA… ☆175 · Updated this week
- PTX-Tutorial written purely by AIs (Deep Research from OpenAI and Claude 3.7) ☆66 · Updated 10 months ago
- ☆21 · Updated 11 months ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* (see the mesh-sharding sketch after this list) ☆86 · Updated 2 years ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆155 · Updated 2 years ago
- Fast, modern, and low-precision PyTorch optimizers ☆124 · Updated last month
- ML/DL math and method notes ☆66 · Updated 2 years ago
- ☆344 · Updated this week
- LM engine is a library for pretraining/finetuning LLMs ☆113 · Updated this week
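
The JAX quantization entry above gives no detail, so here is a minimal, generic sketch of the core idea such libraries implement: symmetric int8 quantization of matmul operands with per-tensor scales, accumulation in int32, and a float rescale on the way out. The helper names `quantize_int8` and `int8_matmul` are illustrative assumptions, not that library's actual API.

```python
import jax.numpy as jnp

def quantize_int8(x):
    # Per-tensor symmetric scale; epsilon guards against an all-zero input.
    scale = jnp.maximum(jnp.max(jnp.abs(x)) / 127.0, 1e-8)
    q = jnp.clip(jnp.round(x / scale), -127, 127).astype(jnp.int8)
    return q, scale

def int8_matmul(a, b):
    qa, sa = quantize_int8(a)
    qb, sb = quantize_int8(b)
    # Accumulate in int32, then rescale the result back to float32.
    acc = jnp.matmul(qa.astype(jnp.int32), qb.astype(jnp.int32))
    return acc.astype(jnp.float32) * (sa * sb)
```

Real libraries layer configuration (per-channel scales, stochastic rounding, calibration) on top of this basic pattern.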
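To make "minimal yet performant LLM examples in pure JAX" concrete, here is a small sketch of the kind of building block such examples contain: single-head causal self-attention written directly with `jax.numpy` and no framework layers. The `causal_self_attention` helper, shapes, and random weights are illustrative assumptions, not code from the repository.

```python
import jax
import jax.numpy as jnp

def causal_self_attention(x, wq, wk, wv, wo):
    """x: (seq_len, d_model); wq/wk/wv/wo: (d_model, d_model)."""
    seq_len, d_model = x.shape
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = (q @ k.T) / jnp.sqrt(d_model)
    # Mask out future positions so each token attends only to the past.
    mask = jnp.tril(jnp.ones((seq_len, seq_len), dtype=bool))
    scores = jnp.where(mask, scores, -jnp.inf)
    weights = jax.nn.softmax(scores, axis=-1)
    return (weights @ v) @ wo

# Tiny usage example with random weights.
key = jax.random.PRNGKey(0)
keys = jax.random.split(key, 5)
d = 16
x = jax.random.normal(keys[0], (8, d))
params = [jax.random.normal(k, (d, d)) * 0.02 for k in keys[1:]]
out = jax.jit(causal_self_attention)(x, *params)
print(out.shape)  # (8, 16)
```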
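The 4D-parallelism entry describes a PyTorch/🤗 transformers project; as a hedged illustration of the underlying idea (sharding arrays over a named device mesh so different mesh axes carry data, tensor, pipeline, or expert parallelism), here is a minimal JAX sketch in the same language as the examples above. The axis names and the `sharded_matmul` example are assumptions for illustration, not the repository's implementation.

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Build a 2D logical mesh over whatever devices are available
# (a 1x1 mesh on a laptop; the same code spans a pod slice unchanged).
devices = np.array(jax.devices()).reshape(1, -1)
mesh = Mesh(devices, axis_names=("data", "model"))

# Shard activations along "data" and weights along "model".
x = jax.device_put(jnp.ones((8, 128)), NamedSharding(mesh, P("data", None)))
w = jax.device_put(jnp.ones((128, 256)), NamedSharding(mesh, P(None, "model")))

@jax.jit
def sharded_matmul(x, w):
    return x @ w  # compiler propagates shardings to the output

y = sharded_matmul(x, w)
print(y.shape)  # (8, 256)
```

Adding further mesh axes (e.g. for pipeline stages or experts) follows the same pattern of naming an axis and assigning it in the `PartitionSpec`.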