AI-Hypercomputer / jetstream-pytorch
PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference
☆64 · Updated 3 months ago
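For context, jetstream-pytorch builds on PyTorch/XLA, which exposes TPUs (and other XLA backends) as ordinary torch devices. Below is a minimal sketch of that underlying device model, assuming torch and torch_xla are installed on a TPU VM; it illustrates PyTorch/XLA itself, not jetstream-pytorch's own serving API.

```python
import torch
import torch_xla.core.xla_model as xm

# Acquire an XLA device (a TPU core when running on a TPU VM).
device = xm.xla_device()

# Tensors and modules move to it like to any other torch device.
x = torch.randn(4, 4).to(device)
y = x @ x

# XLA traces lazily: mark_step() flushes the pending graph so it is
# compiled and executed on the device.
xm.mark_step()
print(y.cpu())
```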
Alternatives and similar repositories for jetstream-pytorch
Users interested in jetstream-pytorch are comparing it to the libraries listed below.
- Google TPU optimizations for transformers models ☆114 · Updated 5 months ago
- ☆142 · Updated this week
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆205 · Updated this week
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs welcome) ☆354 · Updated last month
- ☆214 · Updated 5 months ago
- Load compute kernels from the Hub ☆203 · Updated this week
- ☆21 · Updated 4 months ago
- Extensible collectives library in Triton ☆87 · Updated 3 months ago
- Ring-attention experiments ☆144 · Updated 8 months ago
- This repository contains the experimental PyTorch-native float8 training UX ☆224 · Updated 11 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆156 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆135 · Updated this week
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆128 · Updated 7 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated 11 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆46 · Updated 2 weeks ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆80 · Updated 10 months ago
- Fast low-bit matmul kernels in Triton ☆330 · Updated this week
- ☆106 · Updated 10 months ago
- Applied AI experiments and examples for PyTorch ☆281 · Updated last month
- Learn CUDA with PyTorch ☆29 · Updated this week
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆51 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and an SDPA implementation of Flash Attention (see the SDPA sketch after this list) ☆255 · Updated this week
- A bunch of kernels that might make stuff slower 😉 ☆54 · Updated this week
- ☆225 · Updated this week
- ☆45 · Updated last year
- Collection of kernels written in the Triton language ☆136 · Updated 3 months ago
- An experiment in using Tangent to autodiff Triton ☆79 · Updated last year
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆359 · Updated 2 weeks ago
- Repository for sparse finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- ☆173 · Updated this week
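Several entries above (for example the FSDP/SDPA pretraining repository) lean on PyTorch's built-in scaled_dot_product_attention, which dispatches to a FlashAttention-style fused kernel when the device, dtype, and shapes allow it. A generic sketch of that public API follows; it is an illustration of SDPA itself, not code taken from any of the listed repositories.

```python
import torch
import torch.nn.functional as F

# Batch of 2 sequences, 8 heads, sequence length 128, head dim 64.
q = torch.randn(2, 8, 128, 64)
k = torch.randn(2, 8, 128, 64)
v = torch.randn(2, 8, 128, 64)

# SDPA selects a fused backend (FlashAttention, memory-efficient, or the
# plain math fallback) automatically; is_causal applies a causal mask.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 128, 64])
```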