daemyung / practice-triton
Triangles in action! Triton
☆16 · Updated last year
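practice-triton is a hands-on Triton tutorial. For context, below is a minimal vector-add kernel of the kind such tutorials usually start with; this is an illustrative sketch using Triton's public API only, and the `add_kernel`/`add` names are made up for this example, not taken from the repository.

```python
import torch
import triton
import triton.language as tl

# The canonical "hello world" of Triton: element-wise addition.
@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                        # one program per block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                        # guard the ragged last block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)                     # 1D launch grid
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

On a CUDA device, `add(torch.randn(4096, device="cuda"), torch.randn(4096, device="cuda"))` launches one program instance per 1024-element block, with the mask guarding the final partial block.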
Alternatives and similar repositories for practice-triton
Users interested in practice-triton are comparing it to the libraries listed below.
- A performance library for machine learning applications. ☆185 · Updated 2 years ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- A hackable, simple, and research-friendly GRPO training framework with high-speed weight synchronization in multi-node environments. ☆35 · Updated 4 months ago
- ☆27 · Updated 2 years ago
- Elixir: Train a Large Language Model on a Small GPU Cluster ☆15 · Updated 2 years ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆90 · Updated last year
- Large-scale pre-training with 4D parallelism for 🤗 transformers using Mixture of Experts *(still a work in progress)* ☆86 · Updated 2 years ago
- Flexibly track outputs and grad-outputs of torch.nn.Module. ☆13 · Updated 2 years ago
- Easy and Efficient Quantization for Transformers ☆202 · Updated 6 months ago
- PyTorch/XLA SPMD test code on Google TPUs ☆23 · Updated last year
- Automatic differentiation for Triton kernels ☆29 · Updated 4 months ago
- ☆103 · Updated 2 years ago
- Mixed-precision training from scratch with Tensors and CUDA ☆28 · Updated last year
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆136 · Updated last year
- Load compute kernels from the Hub ☆357 · Updated 3 weeks ago
- ☆47 · Updated last year
- Official implementation for Training LLMs with MXFP4 ☆116 · Updated 8 months ago
- Transformers components but in Triton ☆34 · Updated 8 months ago
- ring-attention experiments ☆161 · Updated last year
- ☆124 · Updated last year
- Official repository for K-EXAONE built by LG AI Research ☆45 · Updated this week
- The evaluation framework for training-free sparse attention in LLMs ☆108 · Updated 2 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆90 · Updated 5 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆92 · Updated 3 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- Example of applying CUDA graphs to LLaMA-v2 ☆12 · Updated 2 years ago
- TPU inference for vLLM, with unified JAX and PyTorch support. ☆207 · Updated this week
- OSLO: Open Source for Large-scale Optimization ☆175 · Updated 2 years ago
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- Triton-based Symmetric Memory operators and examples ☆72 · Updated 2 months ago