gpu-mode / discord-cluster-manager
Write a fast kernel and run it on Discord. See how you compare against the best!
☆46 · Updated this week
Alternatives and similar repositories for discord-cluster-manager
Users interested in discord-cluster-manager are comparing it to the libraries listed below.
- extensible collectives library in triton ☆86 · Updated 2 months ago
- Learn CUDA with PyTorch ☆25 · Updated 2 weeks ago
- Collection of kernels written in Triton language ☆128 · Updated 2 months ago
- Experiment of using Tangent to autodiff triton ☆79 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆51 · Updated this week
- PTX-Tutorial Written Purely By AIs (Deep Research by OpenAI and Claude 3.7) ☆67 · Updated 2 months ago
- ring-attention experiments ☆144 · Updated 8 months ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆134 · Updated last year
- TritonParse is a tool designed to help developers analyze and debug Triton kernels by visualizing the compilation process and source code… ☆93 · Updated this week
- Reference Kernels for the Leaderboard ☆59 · Updated this week
- Fast low-bit matmul kernels in Triton ☆322 · Updated this week
- Personal solutions to the Triton Puzzles ☆19 · Updated 11 months ago
- train with kittens! ☆59 · Updated 7 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆136 · Updated this week
- Load compute kernels from the Hub ☆172 · Updated this week
- ☆28 · Updated 5 months ago
- Cataloging released Triton kernels. ☆236 · Updated 5 months ago
- High-Performance SGEMM on CUDA devices ☆95 · Updated 5 months ago
- ☆88 · Updated last year
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆186 · Updated last month
- ☆81 · Updated 7 months ago
- ☆219 · Updated this week
- ☆21 · Updated 3 months ago
- PCCL (Prime Collective Communications Library) implements fault tolerant collective communications over IP ☆94 · Updated last month
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆45 · Updated 11 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆184 · Updated 3 weeks ago
- Custom kernels in Triton language for accelerating LLMs ☆22 · Updated last year
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆79 · Updated 9 months ago
- ☆73 · Updated 2 months ago
- Explore training for quantized models ☆18 · Updated this week