JAX backend for SGLang
☆269 · updated May 9, 2026
Alternatives and similar repositories for sglang-jax
Users interested in sglang-jax are comparing it to the libraries listed below.
- Minimal yet performant LLM examples in pure JAX (☆253, updated Apr 10, 2026)
- Tokamax: a GPU and TPU kernel library (☆213, updated this week)
- Tensor Parallelism with JAX + Shard Map (☆11, updated Sep 29, 2023)
- Turn jitted JAX functions back into Python source code (☆23, updated Dec 16, 2024)
- ByteCheckpoint: A Unified Checkpointing Library for LFMs (☆278, updated Feb 2, 2026)
- DeeperGEMM: a heavily optimized version (☆86, updated May 5, 2025)
- Convert StableHLO models into Apple Core ML format (☆22, updated Apr 16, 2026)
- Accelerate LLM preference tuning via prefix sharing with a single line of code (☆52, updated Jul 4, 2025)
- Benchmark tests supporting the TiledCUDA library (☆18, updated Nov 19, 2024)
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… (☆294, updated Apr 23, 2026)
- An efficient method for converting from internal to Cartesian coordinates that uses the platform-agnostic JAX Python library (☆21, updated Jun 12, 2024)
- Einsum-like high-level array sharding API for JAX (☆34, updated Jul 16, 2024)
- Tidy autoregressive inference in JAX (☆15, updated Sep 1, 2025)
- TPU inference for vLLM, with unified JAX and PyTorch support (☆307, updated May 4, 2026)
- Distributed pretraining of large language models (LLMs) on cloud TPU slices with JAX and Equinox (☆25, updated Sep 29, 2024)
- FlashInfer: Kernel Library for LLM Serving (☆5,580, updated this week)
- Tile-based language built for AI computation across all scales (☆146, updated this week)
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference (☆82, updated Dec 18, 2025)
- jax-triton contains integrations between JAX and OpenAI Triton (☆450, updated Apr 23, 2026)
- C++17 implementation of einops for libtorch: clear and reliable tensor manipulations with Einstein-like notation (☆11, updated Oct 16, 2023)
- Fréchet inception distance (FID) evaluation in JAX (☆14, updated May 28, 2024)
- Distributed Compiler based on Triton for Parallel Systems (☆1,421, updated Apr 22, 2026)
- DPO, but faster 🚀 (☆52, updated Dec 6, 2024)
- An NCCL extension library, designed to efficiently offload GPU memory allocated by the NCCL communication library (☆105, updated Dec 17, 2025)
- SGLang Kernel Wheel Index (☆22, updated this week)
- Scalable toolkit for efficient model reinforcement (☆1,627, updated this week)
- NVIDIA Inference Xfer Library (NIXL) (☆1,022, updated this week)
- Debug print operator for cudagraph debugging (☆15, updated Aug 2, 2024)
- Minimal but scalable implementation of large language models in JAX (☆35, updated Nov 28, 2025)
- A study of CUTLASS (☆22, updated Nov 10, 2024)
- Open Model Engine (OME): Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… (☆440, updated this week)
- slime is an LLM post-training framework for RL scaling (☆5,548, updated Apr 30, 2026)
- Research prototype of PRISM: a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing (☆62, updated Mar 17, 2026)
- JAX bindings for the flash-attention3 kernels (☆22, updated Jan 2, 2026)