daemyung / practice-triton
Triangles in practice! Triton
☆16 · Updated last year
Alternatives and similar repositories for practice-triton
Users interested in practice-triton are comparing it to the repositories listed below.
- A performance library for machine learning applications. ☆185 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- Elixir: Train a Large Language Model on a Small GPU Cluster ☆15 · Updated 2 years ago
- Automatic differentiation for Triton Kernels ☆29 · Updated 5 months ago
- A hackable, simple, and research-friendly GRPO training framework with high-speed weight synchronization in a multinode environment. ☆35 · Updated 4 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- ☆103 · Updated 2 years ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆86 · Updated 2 years ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆90 · Updated last year
- ring-attention experiments ☆161 · Updated last year
- ☆14 · Updated 10 months ago
- JORA: JAX Tensor-Parallel LoRA Library (ACL 2024) ☆36 · Updated last year
- ☆124 · Updated last year
- Flexibly track outputs and grad-outputs of torch.nn.Module. ☆13 · Updated 2 years ago
- Pytorch/XLA SPMD test code on Google TPU ☆23 · Updated last year
- The simplest implementation of recent sparse attention patterns for efficient LLM inference. ☆90 · Updated 5 months ago
- Mixed precision training from scratch with Tensors and CUDA ☆28 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆73 · Updated last week
- Transformers components but in Triton ☆34 · Updated 8 months ago
- ☆83 · Updated 2 years ago
- [ICLR 2025] Breaking the throughput-latency trade-off for long sequences with speculative decoding ☆136 · Updated last year
- Triton-based Symmetric Memory operators and examples ☆72 · Updated 2 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆108 · Updated 2 months ago
- ☆47 · Updated last year
- ☆15 · Updated 5 months ago
- Triton kernels for Flux ☆22 · Updated 6 months ago
- A fork of SGLang for hip-attention integration. Please refer to hip-attention for details. ☆18 · Updated 2 weeks ago
- TPU inference for vLLM, with unified JAX and PyTorch support. ☆207 · Updated this week
- Experiment of using Tangent to autodiff Triton ☆81 · Updated last year