AI-Hypercomputer / tpu-recipes
☆49 · Updated last week
Alternatives and similar repositories for tpu-recipes
Users interested in tpu-recipes are comparing it to the libraries listed below; a short JAX sketch after the list illustrates the kind of TPU workload these tools target.
- ☆145 · Updated last week
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆76 · Updated last month
- A set of Python scripts that make your experience on TPU better ☆54 · Updated last month
- Google TPU optimizations for transformers models ☆120 · Updated 9 months ago
- Load compute kernels from the Hub ☆304 · Updated last week
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool to help Cloud developers orchestrate training jobs on accelerat… ☆147 · Updated this week
- ☆190 · Updated last month
- JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆385 · Updated 4 months ago
- ☆335 · Updated last month
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 4 months ago
- 👷 Build compute kernels ☆163 · Updated last week
- torchprime is a reference model implementation for PyTorch on TPU ☆39 · Updated last week
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 4 months ago
- ☆21 · Updated 7 months ago
- Inference code for LLaMA models in JAX ☆119 · Updated last year
- PyTorch-centric eager-mode debugger ☆48 · Updated 10 months ago
- seqax = sequence modeling + JAX ☆168 · Updated 3 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆58 · Updated last week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆436 · Updated this week
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components ☆215 · Updated this week
- LM engine is a library for pretraining/finetuning LLMs ☆72 · Updated this week
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆87 · Updated last year
- Accelerate and optimize performance with streamlined training and serving options in JAX ☆317 · Updated last week
- ☆121 · Updated last year
- A JAX quantization library ☆52 · Updated this week
- ☆91 · Updated last year
- Recipes for reproducing training and serving benchmarks for large machine learning models using GPUs on Google Cloud ☆93 · Updated this week
- An experiment in using Tangent to autodiff Triton ☆80 · Updated last year
- PyTorch DTensor-native training library for LLMs/VLMs with out-of-the-box Hugging Face support ☆135 · Updated this week
- ☆46 · Updated last year
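
Many of the repositories above assume a working JAX-on-TPU setup. As a minimal sketch of the kind of workload they target (it assumes only that the `jax` package is installed; without a TPU runtime, JAX falls back to GPU or CPU):

```python
# Minimal sketch: detect the available backend and run a jit-compiled
# matmul through XLA. On a Cloud TPU VM the TPU backend is selected
# automatically; elsewhere JAX falls back to GPU or CPU.
import jax
import jax.numpy as jnp

print("backend:", jax.default_backend())  # e.g. "tpu", "gpu", or "cpu"
print("devices:", jax.devices())

@jax.jit  # compiled once per input shape/dtype via XLA
def matmul(a, b):
    return a @ b

key = jax.random.PRNGKey(0)
ka, kb = jax.random.split(key)
a = jax.random.normal(ka, (1024, 1024), dtype=jnp.bfloat16)
b = jax.random.normal(kb, (1024, 1024), dtype=jnp.bfloat16)

out = matmul(a, b)
print(out.shape, out.dtype)  # (1024, 1024) bfloat16
```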