AI-Hypercomputer / maxdiffusion
☆194 · Updated this week
Alternatives and similar repositories for maxdiffusion:
Users interested in maxdiffusion are comparing it to the libraries listed below.
- JAX implementation of the Llama 2 model ☆216 · Updated last year
- ☆136 · Updated 2 weeks ago
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆290 · Updated this week
- Google TPU optimizations for transformers models ☆102 · Updated last month
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆482 · Updated this week
- Scalable and Performant Data Loading ☆224 · Updated this week
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆122 · Updated 10 months ago
- ☆184 · Updated 3 weeks ago
- Focused on fast experimentation and simplicity ☆69 · Updated 2 months ago
- ☆301 · Updated 8 months ago
- ☆75 · Updated 8 months ago
- ☆212 · Updated 8 months ago
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆53 · Updated last month
- Implementation of Diffusion Transformer (DiT) in JAX ☆267 · Updated 9 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆222 · Updated 7 months ago
- jax-triton contains integrations between JAX and OpenAI Triton ☆382 · Updated this week
- Supporting PyTorch FSDP for optimizers ☆79 · Updated 3 months ago
- ☆202 · Updated last month
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆227 · Updated last week
- Efficient optimizers ☆183 · Updated last week
- Faster generation with text-to-image diffusion models. ☆210 · Updated 5 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆506 · Updated 4 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆100 · Updated 3 months ago
- Inference code for LLaMA models in JAX ☆116 · Updated 9 months ago
- ☆288 · Updated last week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (see the sketch after this list) ☆305 · Updated 3 months ago
- WIP ☆93 · Updated 7 months ago
- seqax = sequence modeling + JAX ☆148 · Updated 2 weeks ago
- PyTorch per step fault tolerance (actively under development) ☆265 · Updated this week
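
To illustrate the memory-layers entry above: below is a minimal conceptual sketch of a trainable key-value lookup layer. It is not that repository's implementation; the class name, sizes (`n_keys`, `top_k`), and the dense scoring over all keys are illustrative assumptions (real memory layers typically use a product-key decomposition so top-k search avoids scoring every key).

```python
# Minimal sketch of a memory layer: trainable key-value lookup that adds
# parameters (the value table) while each token reads only top_k entries.
# Illustrative only -- not the linked repository's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    def __init__(self, d_model: int, n_keys: int = 4096, top_k: int = 4):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model)
        # Trainable keys and values; only top_k values are gathered per token.
        self.keys = nn.Parameter(torch.randn(n_keys, d_model) / d_model**0.5)
        self.values = nn.Embedding(n_keys, d_model)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        q = self.query_proj(x)
        scores = q @ self.keys.t()                    # (batch, seq, n_keys)
        w, idx = scores.topk(self.top_k, dim=-1)      # select top_k keys per token
        w = F.softmax(w, dim=-1)                      # normalize over selected keys
        v = self.values(idx)                          # (batch, seq, top_k, d_model)
        return x + (w.unsqueeze(-1) * v).sum(dim=-2)  # residual add of looked-up values

# Hypothetical usage
layer = MemoryLayer(d_model=64)
out = layer(torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 10, 64])
```

The sparsity is what keeps FLOPs roughly flat as the value table grows: the per-token compute is dominated by the query projection and the `top_k` gathered values, not by `n_keys`.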