rwitten / HighPerfLLMs2024Links
☆511 · Updated last year
Alternatives and similar repositories for HighPerfLLMs2024Links
Users interested in HighPerfLLMs2024Links are comparing it to the libraries listed below.
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆424 · Updated this week
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆607 · Updated this week
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆513 · Updated this week
- ☆273 · Updated 11 months ago
- Building blocks for foundation models. ☆515 · Updated last year
- What would you do with 1000 H100s... ☆1,061 · Updated last year
- Small-scale distributed training of sequential deep learning models, built on Numpy and MPI. ☆134 · Updated last year
- Puzzles for exploring transformers ☆354 · Updated 2 years ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆801 · Updated last month
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆561 · Updated 3 weeks ago
- Best practices & guides on how to write distributed PyTorch training code ☆444 · Updated 4 months ago
- PyTorch Single Controller ☆296 · Updated this week
- ☆198 · Updated 5 months ago
- seqax = sequence modeling + JAX ☆163 · Updated last month
- ☆440 · Updated 8 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆188 · Updated last month
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆359 · Updated last week
- ☆160 · Updated last year
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems ☆468 · Updated this week
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆288 · Updated this week
- Annotated version of the Mamba paper ☆486 · Updated last year
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆354 · Updated last month
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆255 · Updated this week
- ☆259 · Updated this week
- ☆320 · Updated 2 weeks ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆279 · Updated last year
- ☆225 · Updated this week
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆378 · Updated 4 months ago
- Puzzles for learning Triton ☆1,747 · Updated 7 months ago
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆1,566 · Updated last month