daemyung / practice-triton
Triangles in practice! Triton
☆16 · Updated last year
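For context, practice-triton is a hands-on Triton tutorial. Below is a minimal sketch of the kind of kernel such tutorials usually start from, a vector add; the kernel name, block size, and launch parameters are illustrative assumptions and are not taken from the repository.

```python
# Illustrative only: a minimal Triton vector-add kernel, not code from practice-triton.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                          # each program instance handles one block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                          # guard the tail block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)                   # one program per 1024-element block
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```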
Alternatives and similar repositories for practice-triton
Users interested in practice-triton are comparing it to the libraries listed below.
- A performance library for machine learning applications. ☆184 · Updated 2 years ago
- OSLO: Open Source for Large-scale Optimization ☆174 · Updated 2 years ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- A hackable, simple, and research-friendly GRPO training framework with high-speed weight synchronization in a multi-node environment. ☆31 · Updated last month
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- Elixir: Train a Large Language Model on a Small GPU Cluster ☆15 · Updated 2 years ago
- ☆27 · Updated last year
- Pytorch/XLA SPMD test code on Google TPU ☆23 · Updated last year
- ☆103 · Updated 2 years ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆83 · Updated last year
- Flexibly track outputs and grad-outputs of torch.nn.Module. ☆13 · Updated 2 years ago
- ☆83 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆223 · Updated last year
- ring-attention experiments ☆153 · Updated 11 months ago
- Experiment of using Tangent to autodiff triton ☆80 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆89 · Updated 2 months ago
- ☆120 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆161 · Updated 6 months ago
- [NeurIPS'23] Speculative Decoding with Big Little Decoder ☆94 · Updated last year
- Some common Huggingface transformers in maximal update parametrization (µP) ☆82 · Updated 3 years ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆130 · Updated 10 months ago
- ☆46 · Updated last year
- Official implementation for Training LLMs with MXFP4 ☆96 · Updated 5 months ago
- Simple implementation of Speculative Sampling in NumPy for GPT-2. ☆96 · Updated 2 years ago
- Mixed precision training from scratch with Tensors and CUDA ☆27 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- FriendliAI Model Hub ☆91 · Updated 3 years ago
- Easy and Efficient Quantization for Transformers ☆203 · Updated 3 months ago
- This is a fork of SGLang for hip-attention integration. Please refer to hip-attention for details. ☆17 · Updated this week
- ☆127 · Updated last year