AI-Hypercomputer / kithara
☆14 · Updated 3 months ago
Alternatives and similar repositories for kithara
Users interested in kithara are comparing it to the libraries listed below.
- torchprime is a reference model implementation for PyTorch on TPU. ☆36 · Updated this week
- ☆15 · Updated 4 months ago
- ☆146 · Updated last month
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆71 · Updated 5 months ago
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆375 · Updated 3 months ago
- ☆23 · Updated 2 weeks ago
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool to help Cloud developers orchestrate training jobs on accelerat… ☆141 · Updated this week
- ☆188 · Updated 2 weeks ago
- ☆45 · Updated 3 weeks ago
- A JAX-native LLM Post-Training Library ☆143 · Updated this week
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆535 · Updated 2 weeks ago
- Recipes for reproducing training and serving benchmarks for large machine learning models using GPUs on Google Cloud. ☆84 · Updated this week
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆388 · Updated this week
- ☆21 · Updated this week
- Testing framework for Deep Learning models (Tensorflow and PyTorch) on Google Cloud hardware accelerators (TPU and GPU) ☆65 · Updated 3 months ago
- ☆534 · Updated last year
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆401 · Updated 2 weeks ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆265 · Updated last month
- ☆330 · Updated this week
- Two implementations of ZeRO-1 optimizer sharding in JAX ☆14 · Updated 2 years ago
- Load compute kernels from the Hub ☆283 · Updated this week
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆658 · Updated this week
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆211 · Updated this week
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆161 · Updated 2 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated last year
- This repository hosts code that supports the testing infrastructure for the PyTorch organization. For example, this repo hosts the logic… ☆100 · Updated this week
- Scalable and Performant Data Loading ☆299 · Updated this week
- Minimal yet performant LLM examples in pure JAX ☆158 · Updated this week
- ☆16 · Updated 6 months ago
- ☆261 · Updated this week