jax-ml / jax-triton
jax-triton contains integrations between JAX and OpenAI Triton.
☆384 · Updated this week
Alternatives and similar repositories for jax-triton:
Users interested in jax-triton are comparing it to the libraries listed below.
- Orbax provides common checkpointing and persistence utilities for JAX users. ☆348 · Updated this week
- JMP is a Mixed Precision library for JAX. ☆193 · Updated last month
- JAX-Toolbox ☆287 · Updated this week
- Library for reading and processing ML training data. ☆407 · Updated this week
- JAX Synergistic Memory Inspector ☆170 · Updated 8 months ago
- CLU lets you write beautiful training loops in JAX. ☆335 · Updated 2 weeks ago
- Implementation of Flash Attention in Jax ☆206 · Updated last year
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆483 · Updated last week
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆555 · Updated this week
- Named Tensors for Legible Deep Learning in JAX ☆167 · Updated this week
- seqax = sequence modeling + JAX ☆148 · Updated last week
- OpTree: Optimized PyTree Utilities ☆172 · Updated this week
- A simple library for scaling up JAX programs ☆134 · Updated 4 months ago
- LoRA for arbitrary JAX models and functions ☆135 · Updated last year
- A user-friendly toolchain that enables the seamless execution of ONNX models using JAX as the backend. ☆108 · Updated 3 weeks ago
- Extending JAX with custom C++ and CUDA code ☆387 · Updated 7 months ago
- Implementation of a Transformer, but completely in Triton ☆260 · Updated 2 years ago
- A stand-alone implementation of several NumPy dtype extensions used in machine learning. ☆255 · Updated last week
- A library for unit scaling in PyTorch ☆124 · Updated 3 months ago
- Run PyTorch in JAX. 🤝 ☆231 · Updated last month
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆370 · Updated this week