ai-compiler-study / triton-kernels
Triton kernels for Flux
☆20 · Updated 2 months ago
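For readers unfamiliar with Triton, the sketch below shows what a minimal kernel looks like (an element-wise add, following the standard Triton tutorial pattern). It is illustrative only and is not taken from this repository or any of the ones listed below.

```python
# Minimal, generic Triton kernel sketch (element-wise add).
# Not from the triton-kernels repo; shown only to illustrate the style of code
# that Triton-kernel repositories like these contain.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                      # which block this program handles
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                      # guard against out-of-bounds lanes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = out.numel()
    grid = (triton.cdiv(n_elements, 1024),)          # one program per 1024-element block
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out
```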
Alternatives and similar repositories for triton-kernels:
Users interested in triton-kernels are comparing it to the libraries listed below.
- Writing FLUX in Triton ☆32 · Updated 5 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆43 · Updated 7 months ago
- Make triton easier ☆47 · Updated 9 months ago
- FlexAttention w/ FlashAttention3 Support ☆26 · Updated 5 months ago
- Faster PyTorch bitsandbytes 4-bit FP4 nn.Linear ops ☆27 · Updated 11 months ago
- PyTorch half-precision GEMM lib w/ fused optional bias + optional ReLU/GELU ☆54 · Updated 3 months ago
- ☆33 · Updated 6 months ago
- DPO, but faster 🚀 ☆40 · Updated 3 months ago
- Hacks for PyTorch ☆18 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆47 · Updated 2 weeks ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- ☆37 · Updated 10 months ago
- [Oral; NeurIPS OPT 2024] μLO: Compute-Efficient Meta-Generalization of Learned Optimizers ☆12 · Updated 3 months ago
- Simple implementation of muP, based on Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it with Adam. ☆73 · Updated 7 months ago
- ☆21 · Updated 3 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆59 · Updated 5 months ago
- Triton Implementation of HyperAttention Algorithm ☆47 · Updated last year
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆18 · Updated 3 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆103 · Updated this week
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated last month
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated last year
- ☆94 · Updated 9 months ago
- ☆75 · Updated 8 months ago
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆30 · Updated 3 months ago
- ☆21 · Updated last week
- A dashboard for exploring timm learning rate schedulers ☆19 · Updated 3 months ago
- ☆21 · Updated 8 months ago