google / saxml
☆121 · Updated this week
Related projects
Alternatives and complementary repositories for saxml
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆228 · Updated this week
- Pax is a JAX-based machine learning framework for training large-scale models. Pax allows for advanced and fully configurable experimenta… ☆456 · Updated last week
- Testing framework for deep learning models (TensorFlow and PyTorch) on Google Cloud hardware accelerators (TPU and GPU) ☆64 · Updated 2 months ago
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆40 · Updated this week
- jax-triton contains integrations between JAX and OpenAI Triton ☆340 · Updated last week
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool that helps Cloud developers orchestrate training jobs on accelerat… ☆80 · Updated this week
- JAX-Toolbox ☆241 · Updated this week
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆146 · Updated this week
- seqax = sequence modeling + JAX ☆132 · Updated 3 months ago
- Inference code for LLaMA models in JAX ☆112 · Updated 5 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆163 · Updated this week
- Implementation of Flash Attention in JAX ☆194 · Updated 8 months ago
- Implementation of a Transformer, but completely in Triton ☆248 · Updated 2 years ago
- Train very large language models in JAX. ☆195 · Updated last year
- Google TPU optimizations for transformers models ☆74 · Updated this week
- This repository contains the experimental PyTorch native float8 training UX ☆211 · Updated 3 months ago
- Orbax provides common checkpointing and persistence utilities for JAX users ☆296 · Updated this week
- Applied AI experiments and examples for PyTorch ☆159 · Updated last week
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆95 · Updated last year
- Extensible collectives library in Triton ☆61 · Updated last month
- A user-friendly toolchain that enables the seamless execution of ONNX models using JAX as the backend. ☆98 · Updated last month
- JAX implementation of the Llama 2 model ☆210 · Updated 9 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆190 · Updated 2 weeks ago
- PyTorch RFCs (experimental) ☆127 · Updated 2 months ago