Quentin-Anthony / nanoMPI
Simple MPI implementation for prototyping or learning
☆300 · Updated 5 months ago
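For a sense of what a minimal MPI-style API involves, here is a toy sketch in pure Python. This is not nanoMPI's actual API; the `MiniComm` class, the `allreduce_sum` method, and the thread-per-rank setup are all illustrative assumptions. It simulates ranks with threads and gives each rank a mailbox (a queue) for point-to-point messages, then builds a naive allreduce on top.

```python
import threading
import queue

class MiniComm:
    """Toy MPI-style communicator (illustrative only, not nanoMPI's API).

    Each rank owns one mailbox; send() drops a message into the
    destination rank's mailbox, recv() blocks on the caller's own.
    """
    def __init__(self, size):
        self.size = size
        self._mail = [queue.Queue() for _ in range(size)]

    def send(self, data, dest):
        self._mail[dest].put(data)

    def recv(self, rank):
        return self._mail[rank].get()  # blocks until a message arrives

    def allreduce_sum(self, rank, value):
        """Naive allreduce: gather everything to rank 0, sum, broadcast back."""
        if rank == 0:
            total = value + sum(self.recv(0) for _ in range(self.size - 1))
            for dest in range(1, self.size):
                self.send(total, dest)
            return total
        self.send(value, 0)
        return self.recv(rank)

def worker(comm, rank, results):
    # Each rank contributes its rank number; every rank gets the global sum.
    results[rank] = comm.allreduce_sum(rank, rank)

comm = MiniComm(4)
results = [None] * 4
threads = [threading.Thread(target=worker, args=(comm, r, results))
           for r in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # every rank sees the same sum 0+1+2+3: [6, 6, 6, 6]
```

A real MPI implementation replaces the in-process queues with network transport and uses tree- or ring-based collectives instead of the rank-0 bottleneck above, but the send/recv/allreduce surface is the same shape.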
Alternatives and similar repositories for nanoMPI
Users interested in nanoMPI are comparing it to the libraries listed below.
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆195 · Updated 7 months ago
- Learn CUDA with PyTorch ☆179 · Updated last month
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆331 · Updated 2 months ago
- ☆178 · Updated last year
- Quantized LLM training in pure CUDA/C++. ☆232 · Updated last week
- Learning about CUDA by writing PTX code. ☆151 · Updated last year
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆453 · Updated 10 months ago
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆813 · Updated last week
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆154 · Updated 2 years ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆594 · Updated 5 months ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆189 · Updated this week
- Dion optimizer algorithm ☆419 · Updated this week
- Helpful kernel tutorials and examples for tile-based GPU programming ☆568 · Updated last week
- PTX tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 9 months ago
- ☆224 · Updated last month
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆248 · Updated 8 months ago
- A simple byte-pair encoding (BPE) tokenizer, written purely in C. ☆144 · Updated last year
- Where GPUs get cooked 👩🍳🔥 ☆353 · Updated last week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆469 · Updated last week
- UNet diffusion model in pure CUDA ☆661 · Updated last year
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark + Toolkit with Torch -> CUDA (+ more DSLs) ☆763 · Updated this week
- ☆271 · Updated last week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆67 · Updated this week
- Alex Krizhevsky's original code from Google Code ☆198 · Updated 9 years ago
- PyTorch-native post-training at scale ☆595 · Updated this week
- Ring-attention experiments ☆161 · Updated last year
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆711 · Updated last week
- GPU documentation for humans ☆507 · Updated last week
- ☆233 · Updated last year
- Fastest kernels written from scratch ☆523 · Updated 4 months ago