AI-Hypercomputer / JetStream
JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in the future; PRs welcome).
☆404 · Updated last month
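JetStream exposes a gRPC endpoint, so client code typically opens a channel to a running server and streams back decoded output. The sketch below is purely illustrative of that pattern: the module path, stub name, RPC, and request fields are assumptions, not the confirmed JetStream API, and should be checked against the proto definitions in the repository.

```python
# Illustrative sketch of a client for a JetStream-style gRPC server.
# ASSUMPTIONS: generated stubs live under jetstream.core.proto, the service is
# named Orchestrator, and Decode is a server-streaming RPC taking a DecodeRequest
# with a max_tokens field. Verify against the repo's .proto files before use.
import grpc

from jetstream.core.proto import jetstream_pb2, jetstream_pb2_grpc  # assumed module path


def stream_decode(prompt: str, address: str = "localhost:9000", max_tokens: int = 64):
    """Send one prompt to the server and yield streamed responses as they arrive."""
    with grpc.insecure_channel(address) as channel:
        stub = jetstream_pb2_grpc.OrchestratorStub(channel)  # assumed stub name
        request = jetstream_pb2.DecodeRequest(max_tokens=max_tokens)  # assumed fields
        # How the prompt is attached (plain text vs. pre-tokenized ids) depends on
        # the proto version; plain text content is assumed here.
        request.text_content.text = prompt
        for response in stub.Decode(request):  # assumed streaming RPC
            yield response


if __name__ == "__main__":
    for chunk in stream_decode("Why is the sky blue?"):
        print(chunk)
```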
Alternatives and similar repositories for JetStream
Users interested in JetStream are comparing it to the libraries listed below.
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆79 · Updated last month
- ☆152 · Updated last month
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool to help Cloud developers orchestrate training jobs on accelerat… ☆162 · Updated this week
- Pax is a JAX-based machine learning framework for training large-scale models. Pax allows for advanced and fully configurable experimenta… ☆548 · Updated 3 weeks ago
- TPU inference for vLLM, with unified JAX and PyTorch support. ☆228 · Updated this week
- ☆345 · Updated last week
- Module, Model, and Tensor Serialization/Deserialization ☆286 · Updated 5 months ago
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆255 · Updated this week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆474 · Updated 3 weeks ago
- Recipes for reproducing training and serving benchmarks for large machine learning models using GPUs on Google Cloud. ☆112 · Updated last week
- A library to analyze PyTorch traces. ☆462 · Updated this week
- Google TPU optimizations for Transformers models ☆135 · Updated 2 weeks ago
- ☆558 · Updated last year
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆412 · Updated this week
- ☆280 · Updated this week
- A tool to configure, launch and manage your machine learning experiments. ☆216 · Updated this week
- ☆304 · Updated last week
- Perplexity's open-source garden for inference technology ☆359 · Updated last month
- ☆219 · Updated last year
- Testing framework for Deep Learning models (TensorFlow and PyTorch) on Google Cloud hardware accelerators (TPU and GPU) ☆64 · Updated last month
- ☆72 · Updated last week
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆219 · Updated this week
- Fast low-bit matmul kernels in Triton ☆427 · Updated this week
- CUDA checkpoint and restore utility ☆410 · Updated 4 months ago
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆205 · Updated this week
- ☆322 · Updated last year
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆279 · Updated 2 months ago
- PyTorch Single Controller ☆957 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated 2 months ago
- torchcomms: a modern PyTorch communications API ☆327 · Updated this week