rwitten / HighPerfLLMs2024
☆526 · Updated last year
Alternatives and similar repositories for HighPerfLLMs2024
Users interested in HighPerfLLMs2024 are comparing it to the libraries listed below
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs☆523Updated this week
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax☆643Updated this week
- What would you do with 1000 H100s...☆1,087Updated last year
- Building blocks for foundation models.☆532Updated last year
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models.☆812Updated 3 weeks ago
- ☆275Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton.☆568Updated last week
- Puzzles for exploring transformers☆366Updated 2 years ago
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta…☆526Updated last week
- PyTorch Single Controller☆361Updated last week
- Best practices & guides on how to write distributed PyTorch training code ☆467 · Updated 6 months ago
- ☆211 · Updated 6 months ago
- ☆444 · Updated 10 months ago
- ☆380 · Updated this week
- Small scale distributed training of sequential deep learning models, built on Numpy and MPI. ☆137 · Updated last year
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆383 · Updated last week
- seqax = sequence modeling + JAX ☆166 · Updated last month
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems ☆527 · Updated this week
- Minimal yet performant LLM examples in pure JAX ☆148 · Updated this week
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆190 · Updated 2 months ago
- Annotated version of the Mamba paper ☆487 · Updated last year
- ☆324 · Updated 3 weeks ago
- Puzzles for learning Triton ☆1,925 · Updated 9 months ago
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆301 · Updated this week
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆389 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆260 · Updated 3 weeks ago
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆1,673 · Updated last month
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆386 · Updated 5 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆291 · Updated last year
- ☆162 · Updated last year