rwitten / HighPerfLLMs2024
☆419 · Updated 9 months ago
Alternatives and similar repositories for HighPerfLLMs2024:
Users who are interested in HighPerfLLMs2024 are comparing it to the libraries listed below.
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆237 · Updated 2 weeks ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆566 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆529 · Updated last month
- seqax = sequence modeling + JAX ☆153 · Updated last week
- Pax is a Jax-based machine learning framework for training large-scale models. Pax allows for advanced and fully configurable experimentation… ☆488 · Updated this week
- What would you do with 1000 H100s... ☆1,035 · Updated last year
- ☆215 · Updated 9 months ago
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆987 · Updated last month
- Best practices & guides on how to write distributed PyTorch training code ☆391 · Updated last month
- Puzzles for exploring transformers ☆342 · Updated last year
- Implementation of Diffusion Transformer (DiT) in JAX ☆270 · Updated 10 months ago
- ☆198 · Updated this week
- ☆428 · Updated 5 months ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆129 · Updated last year
- Building blocks for foundation models. ☆477 · Updated last year
- Puzzles for learning Triton ☆1,566 · Updated 4 months ago
- ☆166 · Updated 2 months ago
- ☆295 · Updated this week
- Annotated version of the Mamba paper ☆481 · Updated last year
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems ☆263 · Updated this week
- ☆153 · Updated last year
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in the future -- PRs welcome) ☆313 · Updated this week
- jax-triton contains integrations between JAX and OpenAI Triton ☆389 · Updated last week
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆173 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆240 · Updated this week
- JAX implementation of the Llama 2 model ☆218 · Updated last year
- Open weights language model from Google DeepMind, based on Griffin. ☆634 · Updated last month
- Helpful tools and examples for working with flex-attention ☆720 · Updated this week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆510 · Updated 5 months ago
- Cataloging released Triton kernels. ☆216 · Updated 3 months ago