gpu-mode / discord-cluster-manager
Write a fast kernel and run it on Discord. See how you compare against the best!
☆55 · Updated this week
Alternatives and similar repositories for discord-cluster-manager
Users interested in discord-cluster-manager are comparing it to the libraries listed below.
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆139 · Updated last year
- How to ship your LLM-generated kernels to PyTorch ☆27 · Updated last week
- Collection of kernels written in the Triton language ☆154 · Updated 5 months ago
- ring-attention experiments ☆150 · Updated 10 months ago
- Extensible collectives library in Triton ☆88 · Updated 5 months ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆85 · Updated this week
- A bunch of kernels that might make stuff slower 😉 ☆58 · Updated 2 weeks ago
- ☆39 · Updated last month
- Fast low-bit matmul kernels in Triton ☆357 · Updated this week
- ☆233 · Updated 3 weeks ago
- ☆28 · Updated 7 months ago
- PTX-Tutorial Written Purely By AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 5 months ago
- High-Performance SGEMM on CUDA devices ☆100 · Updated 7 months ago
- A parallel framework for training deep neural networks ☆63 · Updated 5 months ago
- An experiment using Tangent to autodiff Triton ☆81 · Updated last year
- 👷 Build compute kernels ☆136 · Updated this week
- Cataloging released Triton kernels. ☆252 · Updated this week
- train with kittens! ☆62 · Updated 10 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 3 months ago
- PyTorch Single Controller ☆414 · Updated this week
- ☆217 · Updated 7 months ago
- Load compute kernels from the Hub ☆271 · Updated this week
- Evaluating Large Language Models for CUDA Code Generation. ComputeEval is a framework designed to generate and evaluate CUDA code from Lar… ☆63 · Updated 2 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆219 · Updated 4 months ago
- ☆89 · Updated last year
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆120 · Updated this week
- This repository contains the experimental PyTorch-native float8 training UX ☆224 · Updated last year
- Learn CUDA with PyTorch ☆74 · Updated this week
- TritonParse: A Compiler Tracer, Visualizer, and mini-Reproducer (WIP) for Triton Kernels ☆148 · Updated last week
- Explore training for quantized models ☆24 · Updated 2 months ago