AI-Hypercomputer / maxdiffusion
☆230 · Updated this week
Alternatives and similar repositories for maxdiffusion
Users interested in maxdiffusion are comparing it to the libraries listed below.
- Google TPU optimizations for transformers models ☆114 · Updated 5 months ago
- ☆142 · Updated this week
- ☆186 · Updated last month
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- Scalable and Performant Data Loading ☆288 · Updated this week
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆64 · Updated 3 months ago
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆513 · Updated last week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆359 · Updated 2 weeks ago
- JAX-Toolbox ☆321 · Updated this week
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆354 · Updated last month
- Modular, scalable library to train ML models ☆135 · Updated this week
- Implementation of Flash Attention in Jax ☆213 · Updated last year
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆129 · Updated last year
- ☆79 · Updated last year
- ☆303 · Updated last year
- Load compute kernels from the Hub ☆203 · Updated this week
- A simple library for scaling up JAX programs ☆139 · Updated 8 months ago
- PyTorch Single Controller ☆318 · Updated this week
- ☆320 · Updated 2 weeks ago
- Focused on fast experimentation and simplicity ☆76 · Updated 6 months ago
- DeMo: Decoupled Momentum Optimization ☆189 · Updated 7 months ago
- jax-triton contains integrations between JAX and OpenAI Triton ☆405 · Updated 3 weeks ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆255 · Updated this week
- Implementation of the Llama architecture with RLHF + Q-learning ☆165 · Updated 5 months ago
- Inference code for LLaMA models in JAX ☆118 · Updated last year
- A JAX-based library for building transformers, including implementations of GPT, Gemma, Llama, Mixtral, Whisper, Swin, ViT and more. ☆290 · Updated 10 months ago
- ☆273 · Updated last year
- Efficient optimizers ☆232 · Updated last week
- Faster generation with text-to-image diffusion models. ☆219 · Updated 2 weeks ago
- A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism. ☆128 · Updated 3 months ago