AI-Hypercomputer / jetstream-pytorch
PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference
☆60 · Updated 2 months ago
Alternatives and similar repositories for jetstream-pytorch
Users interested in jetstream-pytorch are comparing it to the libraries listed below.
- JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆334 · Updated this week
- Google TPU optimizations for transformers models ☆112 · Updated 4 months ago
- Extensible collectives library in Triton ☆87 · Updated 2 months ago
- Applied AI experiments and examples for PyTorch ☆271 · Updated this week
- Fast low-bit matmul kernels in Triton ☆303 · Updated last week
- This repository contains the experimental PyTorch native float8 training UX ☆223 · Updated 10 months ago
- Load compute kernels from the Hub ☆139 · Updated this week
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆196 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆127 · Updated this week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆44 · Updated this week
- Collection of kernels written in the Triton language ☆125 · Updated last month
- PyTorch per-step fault tolerance (actively under development) ☆302 · Updated last week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems ☆351 · Updated 3 weeks ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆249 · Updated this week
- A bunch of kernels that might make stuff slower 😉 ☆46 · Updated this week
- Boosting 4-bit inference kernels with 2:4 sparsity ☆75 · Updated 8 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆105 · Updated this week
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆133 · Updated last year