ByteDance-Seed / Triton-distributed
Distributed Compiler based on Triton for Parallel Systems
☆1,350 · Updated this week
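Triton-distributed builds on the Triton DSL, so the kernels it compiles look like ordinary Triton programs extended with communication. As a frame of reference for the list below, here is a minimal single-GPU Triton kernel (an elementwise add). This is a generic sketch of the Triton programming model, assuming a CUDA-capable GPU with `triton` and `torch` installed; it is not code from Triton-distributed itself.

```python
# Minimal Triton kernel: elementwise vector add.
# Generic illustration of the Triton programming model, not Triton-distributed code.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                     # which block this program instance handles
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                     # guard the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)                  # one program per 1024-element block
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

if __name__ == "__main__":
    a = torch.randn(4096, device="cuda")
    b = torch.randn(4096, device="cuda")
    assert torch.allclose(add(a, b), a + b)
```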
Alternatives and similar repositories for Triton-distributed
Users interested in Triton-distributed are comparing it to the libraries listed below.
- FlagGems is an operator library for large language models implemented in the Triton Language. ☆893 · Updated last week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,242 · Updated 5 months ago
- Puzzles for learning Triton; play them with minimal environment configuration! ☆624 · Updated last month
- Perplexity GPU Kernels ☆560 · Updated 3 months ago
- A Quirky Assortment of CuTe Kernels ☆781 · Updated last week
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ☆926 · Updated 2 months ago
- Disaggregated serving system for Large Language Models (LLMs). ☆771 · Updated 10 months ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆683 · Updated this week
- flash attention tutorial written in Python, Triton, CUDA, and CUTLASS (the shared online-softmax core is sketched just after this list) ☆484 · Updated 3 weeks ago
- Zero Bubble Pipeline Parallelism ☆449 · Updated 9 months ago
- Materials for learning SGLang ☆738 · Updated last month
- A throughput-oriented high-performance serving framework for LLMs ☆945 · Updated 3 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆458 · Updated 8 months ago
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆2,120 · Updated 2 weeks ago
- kernels, of the mega variety ☆672 · Updated 2 weeks ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆810 · Updated 11 months ago
- Ring attention implementation with flash attention ☆979 · Updated 5 months ago
- An easy-to-understand TensorOp Matmul tutorial ☆404 · Updated last week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆751 · Updated 6 months ago
- Several optimization methods of half-precision general matrix multiplication (HGEMM) using tensor core with WMMA API and MMA PTX instruct… ☆522 · Updated last year
- NVIDIA Inference Xfer Library (NIXL) ☆876 · Updated this week
- High Performance LLM Inference Operator Library ☆695 · Updated last week
- Accelerating MoE with IO and Tile-aware Optimizations ☆569 · Updated 3 weeks ago
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark + Toolkit with Torch -> CUDA (+ more DSLs) ☆792 · Updated 3 weeks ago
- Step-by-step optimization of CUDA SGEMM ☆428 · Updated 3 years ago
- Analyze the inference of Large Language Models (LLMs): computation, storage, transmission, and hardware roofline mod… (a back-of-the-envelope roofline example also follows this list) ☆617 · Updated last year
- Fastest kernels written from scratch ☆533 · Updated 4 months ago
- depyf is a tool to help you understand and adapt to the PyTorch compiler, torch.compile. ☆783 · Updated 3 months ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆739 · Updated last week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆462 · Updated this week
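Several entries above (notably the flash attention tutorial and the ring attention implementation) revolve around the same core trick: online softmax, which processes key/value blocks one at a time while carrying running statistics, so attention never materializes the full logits matrix. The NumPy sketch below illustrates just that accumulation; the `blockwise_attention` helper and all shapes are hypothetical, not code from any repository listed here.

```python
# Online-softmax attention over K/V blocks (the core idea behind
# flash attention and ring attention). Illustrative NumPy sketch only.
import numpy as np

def blockwise_attention(q, k, v, block=64):
    # q: (n, d); k, v: (m, d); process k/v in chunks of `block` rows.
    n, d = q.shape
    scale = 1.0 / np.sqrt(d)
    m_run = np.full(n, -np.inf)          # running row-wise max of logits
    l_run = np.zeros(n)                  # running softmax denominator
    acc = np.zeros((n, d))               # running weighted sum of values
    for start in range(0, k.shape[0], block):
        kb, vb = k[start:start + block], v[start:start + block]
        s = (q @ kb.T) * scale           # logits for this block: (n, b)
        m_new = np.maximum(m_run, s.max(axis=1))
        p = np.exp(s - m_new[:, None])   # block probabilities at the new max
        correction = np.exp(m_run - m_new)   # rescale previous partial results
        l_run = l_run * correction + p.sum(axis=1)
        acc = acc * correction[:, None] + p @ vb
        m_run = m_new
    return acc / l_run[:, None]

# Matches naive softmax attention up to floating-point error:
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((rows, 16)) for rows in (32, 128, 128))
s = (q @ k.T) / np.sqrt(16)
w = np.exp(s - s.max(1, keepdims=True))
ref = (w / w.sum(1, keepdims=True)) @ v
assert np.allclose(blockwise_attention(q, k, v), ref)
```

Ring attention runs the same loop across GPUs: each device iterates over K/V blocks received from its ring neighbor instead of local chunks, which is why the rescaling trick above makes the sequence dimension shardable.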
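The LLM-inference analyzer above frames performance in roofline terms: a kernel is memory-bound when its arithmetic intensity (FLOPs per byte moved) sits below the hardware's compute-to-bandwidth ratio, and compute-bound above it. A back-of-the-envelope sketch, with placeholder hardware numbers chosen purely for illustration:

```python
# Back-of-the-envelope roofline check for a GEMM in LLM decoding.
# Hardware numbers below are illustrative placeholders, not any real spec.
PEAK_TFLOPS = 300.0          # hypothetical peak half-precision throughput
PEAK_GBPS = 2000.0           # hypothetical HBM bandwidth
ridge = (PEAK_TFLOPS * 1e12) / (PEAK_GBPS * 1e9)   # FLOPs/byte at the ridge point

def gemm_intensity(m, n, k, bytes_per_elem=2):
    flops = 2 * m * n * k                                 # multiply-accumulate count
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)  # read A, B; write C
    return flops / bytes_moved

# A (batch x 4096) @ (4096 x 4096) projection: memory-bound at batch 1,
# crossing toward compute-bound at batch 256 (under these placeholder numbers).
for batch in (1, 256):
    ai = gemm_intensity(batch, 4096, 4096)
    bound = "compute-bound" if ai > ridge else "memory-bound"
    print(f"batch={batch}: {ai:.1f} FLOPs/byte vs ridge {ridge:.0f} -> {bound}")
```

This is the arithmetic behind a common observation in several of the serving projects above: batch-1 decoding is bandwidth-limited, while large-batch prefill can saturate the compute units.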