AI-Hypercomputer / cloud-accelerator-diagnostics
☆26 · Updated last month
Alternatives and similar repositories for cloud-accelerator-diagnostics
Users interested in cloud-accelerator-diagnostics are comparing it to the libraries listed below.
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆547 · Updated 3 weeks ago
- ☆345 · Updated last week
- ☆192 · Updated this week
- ☆152 · Updated last month
- ☆562 · Updated last year
- Tokamax: A GPU and TPU kernel library. ☆169 · Updated last week
- Minimal yet performant LLM examples in pure JAX (see the first sketch after this list) ☆236 · Updated 3 weeks ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆693 · Updated last week
- ☆304 · Updated last week
- jax-triton contains integrations between JAX and OpenAI Triton (see the sketch after this list) ☆437 · Updated last month
- JAX-Toolbox ☆382 · Updated this week
- JAX Synergistic Memory Inspector ☆184 · Updated last year
- Implementation of Flash Attention in Jax (see the sketch after this list) ☆225 · Updated last year
- Library for reading and processing ML training data. ☆677 · Updated last week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆475 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆279 · Updated 2 months ago
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆79 · Updated last month
- ☆289 · Updated last year
- JAX bindings for Flash Attention v2 ☆103 · Updated this week
- A library for unit scaling in PyTorch ☆133 · Updated 6 months ago
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool to help Cloud developers orchestrate training jobs on accelerat… ☆162 · Updated this week
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- Implementation of a Transformer, but completely in Triton ☆279 · Updated 3 years ago
- Pipeline Parallelism for PyTorch ☆784 · Updated last year
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆404 · Updated last month
- Everything you want to know about Google Cloud TPU ☆560 · Updated last year
- Orbax provides common checkpointing and persistence utilities for JAX users (see the sketch after this list) ☆479 · Updated this week
- JAX implementation of the Llama 2 model ☆216 · Updated 2 years ago
- seqax = sequence modeling + JAX ☆170 · Updated 6 months ago
- torchprime is a reference model implementation for PyTorch on TPU. ☆44 · Updated last month
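For the "Minimal yet performant LLM examples in pure JAX" entry above, here is a small sketch of the pure-JAX training-step pattern such projects are built around: parameters live in a plain pytree and an update step is just `jax.jit` around `jax.value_and_grad`. The toy regression model below is my own illustration, not code from that repository.

```python
import jax
import jax.numpy as jnp

def init_params(key, d_in=8, d_out=1):
    # Parameters are an ordinary pytree (here, a dict of arrays).
    k1, _ = jax.random.split(key)
    return {"w": jax.random.normal(k1, (d_in, d_out)) * 0.1,
            "b": jnp.zeros((d_out,))}

def loss_fn(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@jax.jit
def train_step(params, x, y, lr=1e-2):
    # One SGD step: value_and_grad returns the loss and a gradient pytree
    # with the same structure as params.
    loss, grads = jax.value_and_grad(loss_fn)(params, x, y)
    new_params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return new_params, loss

key = jax.random.PRNGKey(0)
params = init_params(key)
x = jax.random.normal(key, (32, 8))
y = jnp.sum(x, axis=-1, keepdims=True)
for _ in range(100):
    params, loss = train_step(params, x, y)
```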
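For the jax-triton entry, a vector-add sketch in the style of the project's README, launching a Triton kernel from JAX through `jax_triton.triton_call`. The exact keyword names (`out_shape`, `grid`, metaparameters forwarded as kwargs) are my recollection of that API and may differ across releases, so treat this as an assumption to check against the installed version.

```python
import jax
import jax.numpy as jnp
import jax_triton as jt
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, block_size: tl.constexpr):
    # Each program instance handles one contiguous block of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * block_size + tl.arange(0, block_size)
    tl.store(out_ptr + offsets, tl.load(x_ptr + offsets) + tl.load(y_ptr + offsets))

def add(x, y, block_size=8):
    out_shape = jax.ShapeDtypeStruct(shape=x.shape, dtype=x.dtype)
    # grid and block_size reflect my assumption about how triton_call
    # forwards launch parameters; verify against the jax-triton docs.
    return jt.triton_call(x, y, kernel=add_kernel, out_shape=out_shape,
                          grid=(x.size // block_size,), block_size=block_size)

x = jnp.arange(64, dtype=jnp.float32)
print(jax.jit(add)(x, x)[:4])  # expected: [0. 2. 4. 6.]
```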
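For the "Implementation of Flash Attention in Jax" entry, a plain-`jnp` reference of the computation a fused FlashAttention kernel reproduces; the fused version tiles Q/K/V into blocks and uses an online softmax so the full [len_q, len_k] score matrix never materializes in HBM. This reference is a generic sketch, not that repository's API.

```python
import jax
import jax.numpy as jnp

def reference_attention(q, k, v):
    # softmax(Q K^T / sqrt(d)) V, computed naively; shapes [..., seq, head_dim].
    d = q.shape[-1]
    scores = jnp.einsum("...qd,...kd->...qk", q, k) / jnp.sqrt(d)
    weights = jax.nn.softmax(scores, axis=-1)
    return jnp.einsum("...qk,...kd->...qd", weights, v)

q = k = v = jnp.ones((2, 128, 64))  # [batch, seq, head_dim]
out = reference_attention(q, k, v)
print(out.shape)  # (2, 128, 64)
```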
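For the Orbax entry, a minimal save/restore sketch using `orbax.checkpoint.PyTreeCheckpointer`. Orbax also ships higher-level utilities such as a checkpoint manager for rotating per-step checkpoints, and the API surface has shifted between releases, so confirm the calls below against the version you install.

```python
import jax.numpy as jnp
import orbax.checkpoint as ocp

# Any JAX pytree (nested dicts/lists of arrays and scalars) can be checkpointed.
state = {"step": 3, "params": {"w": jnp.ones((4, 4)), "b": jnp.zeros((4,))}}

ckptr = ocp.PyTreeCheckpointer()
ckptr.save("/tmp/my_run/checkpoint_3", state)         # writes the pytree to a new checkpoint directory
restored = ckptr.restore("/tmp/my_run/checkpoint_3")  # returns an equivalent pytree
print(restored["step"])
```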