rwitten / HighPerfLLMs2024
☆545 · Updated last year
Alternatives and similar repositories for HighPerfLLMs2024
Users interested in HighPerfLLMs2024 are comparing it to the libraries listed below.
- Building blocks for foundation models. ☆569 · Updated last year
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆683 · Updated this week
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆679 · Updated this week
- What would you do with 1000 H100s... ☆1,121 · Updated last year
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆539 · Updated 2 months ago
- ☆285 · Updated last year
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆822 · Updated 3 months ago
- Puzzles for exploring transformers ☆376 · Updated 2 years ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆582 · Updated 3 months ago
- seqax = sequence modeling + JAX ☆168 · Updated 3 months ago
- Small scale distributed training of sequential deep learning models, built on Numpy and MPI. ☆148 · Updated 2 years ago
- Best practices & guides on how to write distributed PyTorch training code ☆530 · Updated 3 weeks ago
- ☆225 · Updated 3 weeks ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆446 · Updated this week
- Minimal yet performant LLM examples in pure JAX ☆198 · Updated last month
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆195 · Updated 5 months ago
- Open-source framework for the research and development of foundation models. ☆600 · Updated this week
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆321 · Updated this week
- ☆457 · Updated last year
- ☆337 · Updated last week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA (+ more DSLs) ☆655 · Updated this week
- ☆176 · Updated last year
- FlexAttention based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆302 · Updated last week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆271 · Updated last week
- Puzzles for learning Triton ☆2,105 · Updated 11 months ago
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆388 · Updated 5 months ago
- For optimization algorithm research and development. ☆543 · Updated this week
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆428 · Updated 8 months ago
- ☆525 · Updated 3 months ago
- 🧱 Modula software package ☆300 · Updated 2 months ago