mikex86 / tritonc
Standalone command-line tool for compiling Triton kernels
☆20 · Updated last year
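As a rough illustration of the tool's input, here is a minimal Triton kernel (the standard vector-add from the Triton tutorials). The kernel uses the real triton API, but how tritonc itself is invoked on such a file is not shown here and is not part of this sketch.

```python
# A minimal Triton kernel of the kind a standalone compiler like tritonc
# would take as input. This is the canonical vector-add example; the
# tritonc invocation and its flags are intentionally omitted.
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                       # one program per block of elements
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                       # guard the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)
```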
Alternatives and similar repositories for tritonc
Users who are interested in tritonc are comparing it to the libraries listed below.
- Make Triton easier ☆50 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆70 · Updated 2 weeks ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- Simple high-throughput inference library ☆155 · Updated 8 months ago
- ☆21 · Updated 10 months ago
- An experiment using Tangent to autodiff Triton ☆81 · Updated 2 years ago
- microjax: a JAX-like function transformation engine, but micro ☆34 · Updated last year
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆141 · Updated 4 months ago
- A collection of reproducible inference engine benchmarks ☆38 · Updated 9 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- A tracing JIT compiler for PyTorch ☆13 · Updated 4 years ago
- Utilities for Training Very Large Models ☆58 · Updated last year
- A collection of lightweight interpretability scripts to understand how LLMs think ☆89 · Updated this week
- ☆29 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than those seen in pre-training extends the model's context limit ☆63 · Updated 2 years ago
- train with kittens! ☆63 · Updated last year
- ☆23 · Updated 8 months ago
- ☆18 · Updated last year
- RWKV model implementation ☆38 · Updated 2 years ago
- See https://github.com/cuda-mode/triton-index/ instead! ☆11 · Updated last year
- ☆24 · Updated last year
- Nod.ai 🦈 version of 👻. You probably want to start at https://github.com/nod-ai/shark for the product and the upstream IREE repository … ☆107 · Updated last month
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 11 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆68 · Updated this week
- ☆92 · Updated last year
- Learning about CUDA by writing PTX code. ☆151 · Updated last year
- ☆18 · Updated last year
- ☆34 · Updated last year