AI-Hypercomputer / cloud-accelerator-diagnostics
☆22 · Updated last week
Alternatives and similar repositories for cloud-accelerator-diagnostics
Users interested in cloud-accelerator-diagnostics are comparing it to the libraries listed below.
- ☆142 · Updated last week
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆515 · Updated last week
- ☆322 · Updated 3 weeks ago
- ☆187 · Updated last week
- jax-triton contains integrations between JAX and OpenAI Triton ☆407 · Updated last month
- JAX Synergistic Memory Inspector ☆176 · Updated last year
- ☆136 · Updated last week
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆66 · Updated 3 months ago
- Implementation of Flash Attention in Jax ☆214 · Updated last year
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆365 · Updated last week
- ☆113 · Updated last year
- seqax = sequence modeling + JAX ☆165 · Updated this week
- JAX-Toolbox ☆324 · Updated this week
- A JAX-native LLM Post-Training Library ☆70 · Updated this week
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆625 · Updated this week
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆359 · Updated last month
- ☆514 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated 11 months ago
- A stand-alone implementation of several NumPy dtype extensions used in machine learning. ☆281 · Updated 2 weeks ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆256 · Updated 2 weeks ago
- ☆14 · Updated 2 months ago
- Small scale distributed training of sequential deep learning models, built on Numpy and MPI. ☆136 · Updated last year
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool to help Cloud developers orchestrate training jobs on accelerat… ☆132 · Updated this week
- ☆274 · Updated last year
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- A library for unit scaling in PyTorch ☆128 · Updated 2 weeks ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆207 · Updated last week
- JAX bindings for Flash Attention v2 ☆90 · Updated last year
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆435 · Updated this week
- A simple library for scaling up JAX programs ☆139 · Updated 8 months ago