vllm-project / tpu-inference
TPU inference for vLLM, with unified JAX and PyTorch support.
☆231 · Updated this week
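The one-line description above is the only technical detail on this page. As a hedged illustration of the surface tpu-inference targets, the sketch below uses vLLM's standard offline-inference Python API, the entry point that hardware backends such as TPU plug into; the model name and sampling settings are illustrative assumptions, not taken from this repository.

```python
# Minimal sketch of offline inference with vLLM's standard Python API.
# Assumption: tpu-inference acts as a vLLM hardware backend, so the usual
# LLM / SamplingParams entry points apply unchanged on a TPU host.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
# Illustrative sampling settings, not defaults from the repo.
params = SamplingParams(temperature=0.8, max_tokens=32)

# vLLM detects the available hardware platform (TPU, GPU, ...) at startup;
# the model name here is a placeholder.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```

On a TPU VM with the backend installed, the same script should run unmodified; that portability is the point of a unified vLLM backend.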
Alternatives and similar repositories for tpu-inference
Users interested in tpu-inference are comparing it to the libraries listed below.
- vLLM performance dashboard ☆41 · Apr 26, 2024 · Updated last year
- ☆30 · Feb 4, 2026 · Updated last week
- ☆307 · Updated this week
- Minimal yet performant LLM examples in pure JAX ☆240 · Jan 14, 2026 · Updated last month
- ☆15 · May 11, 2025 · Updated 9 months ago
- ☆13 · Updated this week
- ☆18 · Jun 18, 2025 · Updated 7 months ago
- Kernel Library Wheel for SGLang ☆17 · Updated this week
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs welcome). ☆407 · Jan 5, 2026 · Updated last month
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆18 · Nov 18, 2024 · Updated last year
- Flexibly track outputs and grad-outputs of torch.nn.Module. ☆13 · Oct 6, 2023 · Updated 2 years ago
- Recipes for reproducing training and serving benchmarks for large machine learning models using GPUs on Google Cloud. ☆115 · Updated this week
- Paper-reading notes for Berkeley OS prelim exam. ☆14 · Aug 28, 2024 · Updated last year
- A Top-Down Profiler for GPU Applications ☆22 · Feb 29, 2024 · Updated last year
- Triton kernels for Flux ☆22 · Jul 7, 2025 · Updated 7 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆220 · Updated this week
- Multi-Turn RL Training System with AgentTrainer for Language Model Game Reinforcement Learning ☆59 · Dec 18, 2025 · Updated last month
- Google TPU optimizations for transformers models ☆134 · Jan 23, 2026 · Updated 3 weeks ago
- Tokamax: A GPU and TPU kernel library. ☆170 · Updated this week
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- A JAX quantization library ☆90 · Updated this week
- ☆23 · Aug 21, 2025 · Updated 5 months ago
- ☆20 · Nov 23, 2022 · Updated 3 years ago
- Rust crate for some audio utilities ☆27 · Mar 8, 2025 · Updated 11 months ago
- Website with current metrics on the fastest AI models. ☆43 · Nov 13, 2024 · Updated last year
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆142 · Dec 4, 2024 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆25 · Nov 7, 2025 · Updated 3 months ago
- ☆192 · Feb 3, 2026 · Updated last week
- Automatic differentiation for Triton Kernels ☆29 · Aug 12, 2025 · Updated 6 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆93 · Sep 4, 2024 · Updated last year
- ☆73 · Feb 4, 2026 · Updated last week
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆683 · Updated this week
- A practical way of learning Swizzle ☆36 · Feb 3, 2025 · Updated last year
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serving systems. ☆266 · Updated this week
- A Lightweight LLM Post-Training Library ☆2,147 · Updated this week
- EuroSys '24: "Trinity: A Fast Compressed Multi-attribute Data Store" ☆19 · Mar 8, 2025 · Updated 11 months ago
- A collection of specialized agent skills for AI infrastructure development, enabling Claude Code to write, optimize, and debug high-performance… ☆57 · Feb 2, 2026 · Updated last week
- Triton-based implementation of Sparse Mixture of Experts. ☆265 · Oct 3, 2025 · Updated 4 months ago
- ☆563 · Jul 11, 2024 · Updated last year