AI-Hypercomputer / JetStream
JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in the future -- PRs welcome).
☆297 · Updated this week
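JetStream's actual serving API is documented in the repository itself; purely as an illustrative sketch of the kind of building block a memory-optimized XLA inference engine relies on (every name, shape, and constant below is made up for the example and is not JetStream code), the following JAX snippet shows a jitted decode step that writes into a fixed-size, preallocated KV cache so XLA compiles a single program for every generation step:

```python
# Illustrative sketch only -- not JetStream's API. Shows a jitted decode step
# over a preallocated, fixed-size KV cache, the general pattern behind
# throughput/memory-optimized LLM inference on XLA devices.
import jax
import jax.numpy as jnp

# Hypothetical sizes chosen for the demo.
BATCH, HEADS, MAX_LEN, HEAD_DIM = 4, 8, 128, 64

def init_kv_cache():
    # Fixed-size cache: the shapes never change, so XLA compiles one program
    # that is reused for every decode step.
    shape = (BATCH, HEADS, MAX_LEN, HEAD_DIM)
    return jnp.zeros(shape), jnp.zeros(shape)

@jax.jit
def decode_step(k_cache, v_cache, q, k_new, v_new, step):
    # Write this step's key/value into the cache at position `step`.
    k_cache = jax.lax.dynamic_update_slice(k_cache, k_new, (0, 0, step, 0))
    v_cache = jax.lax.dynamic_update_slice(v_cache, v_new, (0, 0, step, 0))
    # Attend the single query token over everything cached so far.
    scores = jnp.einsum("bhd,bhld->bhl", q, k_cache) / jnp.sqrt(HEAD_DIM)
    mask = jnp.arange(MAX_LEN) <= step            # ignore unwritten slots
    scores = jnp.where(mask[None, None, :], scores, -1e9)
    weights = jax.nn.softmax(scores, axis=-1)
    out = jnp.einsum("bhl,bhld->bhd", weights, v_cache)
    return k_cache, v_cache, out

# Tiny usage example with random activations standing in for a real model.
k_cache, v_cache = init_kv_cache()
q = jax.random.normal(jax.random.PRNGKey(0), (BATCH, HEADS, HEAD_DIM))
k_new = q[:, :, None, :]   # stand-in new K/V for the demo
v_new = q[:, :, None, :]
k_cache, v_cache, out = decode_step(k_cache, v_cache, q, k_new, v_new, 0)
print(out.shape)  # (4, 8, 64)
```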
Alternatives and similar repositories for JetStream:
Users interested in JetStream are comparing it to the libraries listed below.
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆54 · Updated last month
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool that helps Cloud developers orchestrate training jobs on accelerat… ☆107 · Updated this week
- Pax is a JAX-based machine learning framework for training large-scale models. Pax allows for advanced and fully configurable experimenta… ☆483 · Updated last week
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆102 · Updated this week
- Module, Model, and Tensor Serialization/Deserialization ☆220 · Updated last month
- Google TPU optimizations for transformers models ☆103 · Updated 2 months ago
- Applied AI experiments and examples for PyTorch ☆249 · Updated this week
- PyTorch per step fault tolerance (actively under development) ☆266 · Updated this week
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆189 · Updated this week
- Fast low-bit matmul kernels in Triton ☆267 · Updated this week
- Testing framework for Deep Learning models (TensorFlow and PyTorch) on Google Cloud hardware accelerators (TPU and GPU) ☆64 · Updated 4 months ago
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems ☆234 · Updated this week
- Recipes for reproducing training and serving benchmarks for large machine learning models using GPUs on Google Cloud. ☆43 · Updated this week
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆354 · Updated this week
- JAX-Toolbox ☆289 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆232 · Updated 2 weeks ago
- CUDA checkpoint and restore utility ☆310 · Updated last month
- Extensible collectives library in Triton ☆84 · Updated 6 months ago
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆177 · Updated this week
- A library to analyze PyTorch traces. ☆348 · Updated last week
- This repository contains the experimental PyTorch native float8 training UX ☆222 · Updated 7 months ago