AI-Hypercomputer / cloud-accelerator-diagnostics
☆20 · Updated 3 weeks ago
Alternatives and similar repositories for cloud-accelerator-diagnostics:
Users interested in cloud-accelerator-diagnostics are comparing it to the libraries listed below.
- ☆184 · Updated last month
- A simple library for scaling up JAX programs ☆134 · Updated 4 months ago
- ☆137 · Updated this week
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆55 · Updated last month
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆484 · Updated last week
- JAX-Toolbox ☆289 · Updated this week
- ☆87 · Updated last week
- jax-triton contains integrations between JAX and OpenAI Triton (a usage sketch follows this list) ☆386 · Updated last week
- ☆215 · Updated 8 months ago
- JAX Synergistic Memory Inspector ☆171 · Updated 8 months ago
- Orbax provides common checkpointing and persistence utilities for JAX users (a checkpointing sketch follows this list) ☆351 · Updated this week
- Inference code for LLaMA models in JAX ☆116 · Updated 10 months ago
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool to help Cloud developers orchestrate training jobs on accelerat… ☆107 · Updated this week
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆299 · Updated this week
- JAX implementation of the Mistral 7b v0.2 model ☆35 · Updated 8 months ago
- ☆290 · Updated this week
- Train very large language models in Jax. ☆203 · Updated last year
- ☆409 · Updated 8 months ago
- seqax = sequence modeling + JAX ☆150 · Updated last week
- Implementation of Flash Attention in Jax ☆206 · Updated last year
- JAX implementation of the Llama 2 model ☆216 · Updated last year
- Experiment of using Tangent to autodiff triton ☆78 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆222 · Updated 7 months ago
- A stand-alone implementation of several NumPy dtype extensions used in machine learning (a usage sketch follows this list) ☆255 · Updated this week
- LoRA for arbitrary JAX models and functions ☆135 · Updated last year
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with Jax and Equinox. ☆24 · Updated 5 months ago
- extensible collectives library in triton ☆84 · Updated 6 months ago
- Google TPU optimizations for transformers models ☆104 · Updated 2 months ago
- A set of Python scripts that makes your experience on TPU better ☆50 · Updated 8 months ago
- If it quacks like a tensor... ☆57 · Updated 4 months ago
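
A few hedged usage sketches for libraries in the list above. First, jax-triton: a minimal sketch of calling a Triton kernel from JAX via `jax_triton.triton_call`, closely following the add-kernel pattern in the project's README. It assumes a CUDA GPU with the `triton` and `jax-triton` packages installed; the block size and shapes are illustrative, not tuned.

```python
# Minimal sketch of calling a Triton kernel from JAX with jax-triton.
# Assumes a CUDA GPU plus the `triton` and `jax-triton` packages.
import jax
import jax.numpy as jnp
import jax_triton as jt
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, block_size: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * block_size + tl.arange(0, block_size)
    x = tl.load(x_ptr + offsets)
    y = tl.load(y_ptr + offsets)
    tl.store(out_ptr + offsets, x + y)


def add(x: jnp.ndarray, y: jnp.ndarray) -> jnp.ndarray:
    block_size = 8
    out_shape = jax.ShapeDtypeStruct(shape=x.shape, dtype=x.dtype)
    # Extra keyword arguments (block_size) are forwarded to the kernel.
    return jt.triton_call(
        x, y,
        kernel=add_kernel,
        out_shape=out_shape,
        grid=(x.size // block_size,),
        block_size=block_size,
    )


x = jnp.arange(8, dtype=jnp.float32)
y = jnp.arange(8, 16, dtype=jnp.float32)
print(jax.jit(add)(x, y))  # elementwise sum of x and y
```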
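Next, Orbax: a minimal checkpointing sketch that saves and restores a parameter pytree with `PyTreeCheckpointer`. The checkpoint path and the pytree are placeholders, and the exact API surface varies somewhat across `orbax-checkpoint` versions.

```python
# Minimal sketch of saving/restoring a JAX pytree with Orbax.
# Path and params are placeholders; API details differ across versions.
import jax.numpy as jnp
import orbax.checkpoint as ocp

params = {"w": jnp.ones((4, 4)), "b": jnp.zeros((4,))}

ckptr = ocp.PyTreeCheckpointer()
# save() refuses to overwrite an existing checkpoint directory by default.
ckptr.save("/tmp/example_ckpt", params)
restored = ckptr.restore("/tmp/example_ckpt")
print(restored["w"].shape)  # (4, 4)
```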
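Finally, the NumPy dtype-extensions entry: a small sketch assuming the repo is the `ml_dtypes` package (whose description matches). Its extra dtypes register as ordinary NumPy dtypes, so they can be used directly in array constructors and casts.

```python
# Small sketch assuming the dtype-extensions repo is the `ml_dtypes` package.
# bfloat16 and the float8 variants behave like regular NumPy dtypes.
import numpy as np
import ml_dtypes

x = np.array([1.0, 2.5, 3.14159], dtype=ml_dtypes.bfloat16)
print(x.dtype)  # bfloat16

y = x.astype(ml_dtypes.float8_e4m3fn)  # cast down to an 8-bit float format
print(y)
```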