thuml / depyf
depyf is a tool to help you understand and adapt to the PyTorch compiler, torch.compile.
☆641 · Updated this week
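For a quick sense of what depyf does, here is a minimal usage sketch based on its documented `prepare_debug` API (the dump directory name is arbitrary, and details may vary across versions): running a compiled function inside the context makes depyf dump the decompiled source that torch.compile generates, so you can read it.

```python
import torch
import depyf

@torch.compile
def toy_fn(x):
    # torch.compile captures and optimizes this function
    return torch.sin(x) + torch.cos(x)

# Inside this context, depyf decompiles the bytecode produced by
# torch.compile (Dynamo) and writes readable Python source files
# into the given directory for inspection.
with depyf.prepare_debug("./depyf_dump"):
    toy_fn(torch.randn(8))
```

After the run, the dump directory contains the transformed source files, which you can open and inspect in an editor.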
Alternatives and similar repositories for depyf:
Users interested in depyf are comparing it to the libraries listed below.
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆805 · Updated this week
- A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆1,021 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆530 · Updated this week
- FlagGems is an operator library for large language models implemented in the Triton language. ☆488 · Updated this week
- Pipeline Parallelism for PyTorch ☆763 · Updated 7 months ago
- Puzzles for learning Triton, playable with minimal environment configuration! ☆281 · Updated 4 months ago
- Distributed Triton for Parallel Systems ☆415 · Updated last week
- An easy-to-understand TensorOp matmul tutorial ☆342 · Updated 6 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆262 · Updated 10 months ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆328 · Updated 3 months ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆587 · Updated 2 months ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆779 · Updated 3 months ago
- Cataloging released Triton kernels. ☆217 · Updated 3 months ago
- Fast CUDA matrix multiplication from scratch ☆689 · Updated last year
- A library to analyze PyTorch traces. ☆366 · Updated last week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆639 · Updated last month
- A CPU+GPU profiling library that provides access to timeline traces and hardware performance counters. ☆794 · Updated this week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems ☆268 · Updated this week
- A simple, high-performance CUDA GEMM implementation. ☆360 · Updated last year
- Ring attention implementation with flash attention ☆737 · Updated last week
- Helpful tools and examples for working with flex-attention ☆720 · Updated last week
- Fastest kernels written from scratch ☆223 · Updated 2 weeks ago
- Collection of kernels written in the Triton language ☆118 · Updated 2 weeks ago
- Fast low-bit matmul kernels in Triton ☆288 · Updated this week
- An open-source, efficient deep learning framework/compiler, written in Python. ☆698 · Updated last month
- Puzzles for learning Triton ☆1,577 · Updated 5 months ago
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instruct… ☆387 · Updated 7 months ago
- Tile primitives for speedy kernels ☆2,259 · Updated this week
- Microsoft Automatic Mixed Precision Library ☆591 · Updated 6 months ago