google / saxml (☆146)
Alternatives and similar repositories for saxml
Users interested in saxml are comparing it to the libraries listed below.
- Pax is a JAX-based machine learning framework for training large-scale models. Pax allows for advanced and fully configurable experimenta… (☆536)
- JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… (☆379)
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool that helps Cloud developers orchestrate training jobs on accelerat… (☆143)
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference (☆73)
- JAX-Toolbox (☆343)
- jax-triton contains integrations between JAX and OpenAI Triton (☆426)
- Inference code for LLaMA models in JAX (☆120)
- Implementation of Flash Attention in JAX (☆219)
- A stand-alone implementation of several NumPy dtype extensions used in machine learning. (☆302)
- seqax = sequence modeling + JAX (☆167)
- Train very large language models in JAX. (☆209)
- JAX implementation of the Llama 2 model (☆218)
- Implementation of a Transformer, but completely in Triton (☆275)
- Orbax provides common checkpointing and persistence utilities for JAX users. (☆428)
- This repository hosts code that supports the testing infrastructure for the PyTorch organization. For example, this repo hosts the logic … (☆101)
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… (☆392)
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… (☆161)
- Testing framework for deep learning models (TensorFlow and PyTorch) on Google Cloud hardware accelerators (TPU and GPU) (☆65)
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX (☆667)
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) (☆414)
- Minimal yet performant LLM examples in pure JAX (☆177)
- JAX Synergistic Memory Inspector (☆180)
- A user-friendly toolchain that enables seamless execution of ONNX models using JAX as the backend. (☆123)