mikex86 / tritonc
Standalone command-line tool for compiling Triton kernels
☆18 · Updated last year
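For context, this is the kind of kernel such a tool compiles: a minimal vector-add written with the standard triton.language API. The kernel below is an illustrative sketch, not code from the tritonc repository.

```python
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the input.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds lanes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)
```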
Alternatives and similar repositories for tritonc
Users interested in tritonc are comparing it to the libraries listed below.
- Make Triton easier ☆47 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆60 · Updated this week
- Experiment using Tangent to autodiff Triton ☆80 · Updated last year
- Simple high-throughput inference library ☆142 · Updated 4 months ago
- ☆21 · Updated 7 months ago
- ☆22 · Updated 10 months ago
- FlexAttention w/ FlashAttention3 support ☆27 · Updated last year
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆126 · Updated 3 weeks ago
- See https://github.com/cuda-mode/triton-index/ instead! ☆10 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 ☆45 · Updated last year
- train with kittens! ☆62 · Updated 11 months ago
- A collection of reproducible inference engine benchmarks ☆33 · Updated 5 months ago
- JAX-like function transformation engine, but micro: microjax ☆32 · Updated 11 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN ☆73 · Updated last year
- ☆49 · Updated last year
- RWKV model implementation ☆38 · Updated 2 years ago
- Demonstration that fine-tuning a RoPE model on longer sequences than it was pre-trained on extends its context limit ☆62 · Updated 2 years ago
- ☆18 · Updated last year
- Latent Diffusion Language Models ☆68 · Updated 2 years ago
- [WIP] Better (FP8) attention for Hopper ☆33 · Updated 7 months ago
- ☆22 · Updated 5 months ago
- SIMD quantization kernels ☆87 · Updated last month
- A tracing JIT compiler for PyTorch ☆13 · Updated 3 years ago
- Utilities for Training Very Large Models ☆58 · Updated last year
- Simplex Random Feature attention, in PyTorch ☆73 · Updated 2 years ago
- PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 6 months ago
- ☆62 · Updated 3 years ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆58 · Updated 2 weeks ago
- A really tiny autograd engine ☆95 · Updated 4 months ago
- Experiments toward training a new and improved T5 ☆75 · Updated last year