AI-Hypercomputer / cloud-accelerator-diagnostics
☆19 · Updated 4 months ago
Alternatives and similar repositories for cloud-accelerator-diagnostics:
Users interested in cloud-accelerator-diagnostics are comparing it to the libraries listed below.
- ☆273 · Updated 6 months ago
- ☆181 · Updated this week
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆47 · Updated this week
- ☆130Updated this week
- Pax is a JAX-based machine learning framework for training large-scale models. Pax allows for advanced and fully configurable experimenta… ☆475 · Updated last week
- Accelerated First Order Parallel Associative Scan ☆170 · Updated 5 months ago
- jax-triton contains integrations between JAX and OpenAI Triton (see the usage sketch after this list) ☆371 · Updated this week
- A simple library for scaling up JAX programs ☆129 · Updated 2 months ago
- ☆278 · Updated last week
- ☆203 · Updated 6 months ago
- Implementation of Flash Attention in JAX ☆204 · Updated 10 months ago
- JAX-Toolbox ☆279 · Updated this week
- seqax = sequence modeling + JAX ☆136 · Updated 6 months ago
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool that helps Cloud developers orchestrate training jobs on accelerat… ☆97 · Updated this week
- JAX implementation of the Mistral 7B v0.2 model ☆35 · Updated 6 months ago
- JAX Synergistic Memory Inspector ☆165 · Updated 6 months ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆116 · Updated last year
- PyTorch per-step fault tolerance (actively under development) ☆226 · Updated this week
- A user-friendly toolchain that enables seamless execution of ONNX models using JAX as the backend. ☆105 · Updated this week
- A stand-alone implementation of several NumPy dtype extensions used in machine learning (see the dtype sketch after this list). ☆240 · Updated this week
- A port of the Mistral-7B model to JAX ☆30 · Updated 6 months ago
- JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in the future -- PRs wel… ☆268 · Updated this week
- Orbax provides common checkpointing and persistence utilities for JAX users (see the checkpointing sketch after this list) ☆328 · Updated this week
- An experiment using Tangent to autodiff Triton ☆74 · Updated last year
- Implementation of a Transformer, but completely in Triton ☆253 · Updated 2 years ago
- ☆13 · Updated 6 months ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆79 · Updated 2 years ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆216 · Updated this week
- This repository contains the experimental PyTorch-native float8 training UX ☆219 · Updated 5 months ago
- Google TPU optimizations for transformers models ☆90 · Updated last week
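
To make the jax-triton entry above concrete: the library's core entry point is `jax_triton.triton_call`, which launches a Triton kernel on JAX arrays. The following is a minimal sketch modeled on the project's documented vector-add pattern; the block size and grid are illustrative, and running it requires a CUDA GPU with `triton` and `jax-triton` installed.

```python
import jax
import jax.numpy as jnp
import jax_triton as jt
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, block_size: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * block_size + tl.arange(0, block_size)
    x = tl.load(x_ptr + offsets)
    y = tl.load(y_ptr + offsets)
    tl.store(out_ptr + offsets, x + y)

def add(x, y):
    block_size = 8
    return jt.triton_call(
        x, y,
        kernel=add_kernel,
        out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype),
        grid=(x.size // block_size,),
        block_size=block_size,
    )

x = jnp.arange(8, dtype=jnp.float32)
y = jnp.arange(8, 16, dtype=jnp.float32)
print(jax.jit(add)(x, y))  # [ 8. 10. 12. 14. 16. 18. 20. 22.]
```

Because `triton_call` is traceable, the wrapper composes with `jax.jit` and the rest of a JAX program.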
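
The NumPy dtype extensions entry above matches the description of the `ml_dtypes` package, which registers low-precision types such as bfloat16 and the float8 variants as ordinary NumPy dtypes. A minimal sketch, assuming `pip install ml_dtypes`:

```python
import numpy as np
import ml_dtypes

# The extended types plug into normal NumPy array creation and casting.
x = np.array([0.1, 0.2, 0.3], dtype=ml_dtypes.bfloat16)
print(x.dtype)  # bfloat16

# Round-tripping through an 8-bit float makes the precision loss visible.
y = x.astype(ml_dtypes.float8_e4m3fn).astype(np.float32)
print(y)  # values rounded to the nearest representable float8

# ml_dtypes.finfo extends numpy.finfo to the new types.
print(ml_dtypes.finfo(ml_dtypes.bfloat16).max)  # ~3.39e38
```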
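
And for the Orbax entry: its basic unit is a checkpointer that saves and restores JAX pytrees to a directory. Below is a minimal sketch using `orbax.checkpoint.PyTreeCheckpointer`; the path is illustrative, and larger training setups would typically use Orbax's `CheckpointManager` to rotate checkpoints instead.

```python
import jax.numpy as jnp
import orbax.checkpoint as ocp

# Any pytree of arrays (e.g. model params plus a step counter) can be saved.
state = {
    "params": {"w": jnp.ones((2, 2)), "b": jnp.zeros(2)},
    "step": 100,
}

checkpointer = ocp.PyTreeCheckpointer()
checkpointer.save("/tmp/ckpt_demo", state)  # target directory must not already exist

restored = checkpointer.restore("/tmp/ckpt_demo")
print(restored["step"])  # 100
```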