gpu-mode / triton-tutorials
☆14 · Updated 2 months ago
Alternatives and similar repositories for triton-tutorials
Users interested in triton-tutorials are comparing it to the libraries listed below.
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- Make triton easier ☆47 · Updated last year
- Hacks for PyTorch ☆19 · Updated 2 years ago
- Personal solutions to the Triton Puzzles ☆19 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆59 · Updated last week
- ☆33 · Updated last month
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆71 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆72 · Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆152 · Updated last month
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated 10 months ago
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated 2 years ago
- DPO, but faster 🚀 ☆44 · Updated 8 months ago
- Quantize transformers to any learned arbitrary 4-bit numeric format ☆39 · Updated 3 weeks ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- Experiment using Tangent to autodiff Triton ☆80 · Updated last year
- ☆32 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- ring-attention experiments ☆146 · Updated 9 months ago
- JORA: JAX Tensor-Parallel LoRA Library (ACL 2024) ☆35 · Updated last year
- ☆158 · Updated last year
- ☆53 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆44 · Updated 10 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- ACL 2023 ☆39 · Updated 2 years ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 3 weeks ago
- ☆29 · Updated 2 years ago
- Context manager to profile the forward and backward times of PyTorch's nn.Module ☆83 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆56 · Updated last week
- KVTuner: Sensitivity-Aware Layer-wise Mixed Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference ☆17 · Updated 2 months ago
- ☆22 · Updated 3 months ago