AI-Hypercomputer / JetStream
JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in the future -- PRs welcome).
☆349 · Updated last week
Alternatives and similar repositories for JetStream
Users interested in JetStream are comparing it to the libraries listed below.
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆61 · Updated 2 months ago
- ☆141 · Updated 2 weeks ago
- Pax is a Jax-based machine learning framework for training large-scale models. Pax allows for advanced and fully configurable experimenta… ☆510 · Updated last week
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool to help Cloud developers orchestrate training jobs on accelerat… ☆123 · Updated this week
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆177 · Updated 2 weeks ago
- ☆317 · Updated this week
- PyTorch per-step fault tolerance (actively under development) ☆329 · Updated this week
- Module, Model, and Tensor Serialization/Deserialization ☆240 · Updated last week
- Google TPU optimizations for transformers models ☆113 · Updated 5 months ago
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU) ☆188 · Updated this week
- Testing framework for deep learning models (TensorFlow and PyTorch) on Google Cloud hardware accelerators (TPU and GPU) ☆64 · Updated this week
- ☆222 · Updated this week
- Applied AI experiments and examples for PyTorch ☆277 · Updated 3 weeks ago
- ☆212 · Updated 4 months ago
- ☆34 · Updated last week
- ☆504 · Updated 11 months ago
- A simplified and automated orchestration workflow to perform ML end-to-end (E2E) model tests and benchmarking on Cloud VMs across differe… ☆48 · Updated this week
- Fast low-bit matmul kernels in Triton ☆322 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆264 · Updated 8 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆204 · Updated this week
- A tool to configure, launch, and manage your machine learning experiments. ☆161 · Updated this week
- ☆186 · Updated 2 weeks ago
- Perplexity GPU Kernels ☆364 · Updated last week
- A library to analyze PyTorch traces. ☆391 · Updated this week
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆367 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆253 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆413 · Updated this week
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆157 · Updated 6 months ago
- KernelBench: Can LLMs Write GPU Kernels? A benchmark with Torch -> CUDA problems ☆415 · Updated 3 weeks ago
- ☆219 · Updated this week