gpu-mode / discord-cluster-manager
Write a fast kernel and run it on Discord. See how you compare against the best!
☆58 · Updated last week
Alternatives and similar repositories for discord-cluster-manager
Users interested in discord-cluster-manager are also comparing the libraries listed below.
- How to ensure correctness and ship LLM generated kernels in PyTorch ☆66 · Updated last week
- ring-attention experiments ☆154 · Updated last year
- Small scale distributed training of sequential deep learning models, built on Numpy and MPI. ☆145 · Updated 2 years ago
- Quantized LLM training in pure CUDA/C++. ☆206 · Updated this week
- Collection of kernels written in Triton language ☆157 · Updated 6 months ago
- PTX-Tutorial Written Purely By AIs (Deep Research of OpenAI and Claude 3.7) ☆66 · Updated 7 months ago
- extensible collectives library in triton ☆89 · Updated 6 months ago
- Fast low-bit matmul kernels in Triton ☆381 · Updated 3 weeks ago
- Experiment of using Tangent to autodiff triton ☆80 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆62 · Updated this week
- High-Performance SGEMM on CUDA devices ☆107 · Updated 9 months ago
- ☆240 · Updated this week
- Learn CUDA with PyTorch ☆92 · Updated last month
- Cataloging released Triton kernels. ☆263 · Updated last month
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 4 months ago
- 👷 Build compute kernels ☆163 · Updated this week
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆99 · Updated last week
- train with kittens! ☆63 · Updated 11 months ago
- ☆28 · Updated 9 months ago
- Explore training for quantized models ☆25 · Updated 3 months ago
- PCCL (Prime Collective Communications Library) implements fault tolerant collective communications over IP ☆133 · Updated last month
- Parallel framework for training and fine-tuning deep neural networks ☆65 · Updated 7 months ago
- Evaluating Large Language Models for CUDA Code Generation: ComputeEval is a framework designed to generate and evaluate CUDA code from Lar… ☆68 · Updated 3 weeks ago
- This repository contains the experimental PyTorch native float8 training UX ☆223 · Updated last year
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆76 · Updated last month
- Learning about CUDA by writing PTX code. ☆144 · Updated last year
- Samples of good AI generated CUDA kernels ☆91 · Updated 4 months ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆420 · Updated last week
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆160 · Updated last week
- ☆42 · Updated last month