AI-Hypercomputer / tpu-recipes
☆54 · Updated this week
Alternatives and similar repositories for tpu-recipes
Users interested in tpu-recipes are comparing it to the libraries listed below:
- Google TPU optimizations for transformers models ☆123 · Updated 10 months ago
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆78 · Updated 2 months ago
- ☆148 · Updated 3 weeks ago
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool that helps Cloud developers orchestrate training jobs on accelerat… ☆154 · Updated this week
- Load compute kernels from the Hub ☆337 · Updated last week
- A set of Python scripts that makes your experience on TPU better ☆54 · Updated 2 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆196 · Updated 6 months ago
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆392 · Updated 5 months ago
- 👷 Build compute kernels ☆190 · Updated this week
- ☆21 · Updated 9 months ago
- Large-scale 4D-parallelism pre-training for 🤗 transformers with Mixture of Experts *(still a work in progress)* ☆87 · Updated last year
- Fast, Modern, and Low Precision PyTorch Optimizers ☆116 · Updated 3 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆61 · Updated this week
- ☆190 · Updated 2 weeks ago
- An experiment in using Tangent to autodiff Triton ☆80 · Updated last year
- Recipes for reproducing training and serving benchmarks for large machine learning models using GPUs on Google Cloud. ☆101 · Updated last week
- Various transformers for FSDP research ☆38 · Updated 3 years ago
- train with kittens! ☆63 · Updated last year
- FlexAttention-based, minimal vllm-style inference engine for fast Gemma 2 inference. ☆313 · Updated last month
- ☆47 · Updated last year
- PTX Tutorial Written Purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 8 months ago
- A JAX quantization library ☆68 · Updated last week
- TPU inference for vLLM, with unified JAX and PyTorch support. ☆170 · Updated last week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆454 · Updated 3 weeks ago
- ☆337 · Updated 2 weeks ago
- PyTorch Distributed-native training library for LLMs/VLMs with out-of-the-box Hugging Face support ☆194 · Updated this week
- seqax = sequence modeling + JAX ☆168 · Updated 4 months ago
- ☆91 · Updated last year
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆325 · Updated this week
- Inference code for LLaMA models in JAX ☆120 · Updated last year