thuml / depyf
depyf is a tool to help you understand and adapt to the PyTorch compiler, torch.compile.
☆550 · Updated last month
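For quick orientation, here is a minimal sketch of how depyf is typically used together with torch.compile. The `depyf.prepare_debug` context manager and the dump directory name follow depyf's documented workflow, but treat the exact call signature as an assumption and check the repository's README for the current API:

```python
# Minimal sketch (assumption: depyf exposes a `prepare_debug` context manager
# that writes the source code torch.compile generates into a directory).
import torch
import depyf

@torch.compile
def toy_fn(x):
    # A tiny function for torch.compile to trace and optimize.
    return torch.sin(x) + torch.cos(x)

# Compiled code executed inside this block has its generated/decompiled
# source dumped to ./depyf_dump for inspection.
with depyf.prepare_debug("./depyf_dump"):
    toy_fn(torch.randn(8))
```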
Alternatives and similar repositories for depyf:
Users interested in depyf are comparing it to the libraries listed below.
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆714 · Updated this week
- Pipeline Parallelism for PyTorch ☆736 · Updated 4 months ago
- FlagGems is an operator library for large language models implemented in the Triton language. ☆397 · Updated this week
- A collection of memory-efficient attention operators implemented in the Triton language. ☆229 · Updated 7 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆505 · Updated 2 months ago
- Puzzles for learning Triton; play them with minimal environment configuration! ☆199 · Updated last month
- An easy-to-understand TensorOp Matmul tutorial ☆306 · Updated 3 months ago
- FlashInfer: Kernel Library for LLM Serving ☆1,797 · Updated this week
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆244 · Updated 2 weeks ago
- A library to analyze PyTorch traces. ☆323 · Updated last month
- A CPU+GPU profiling library that provides access to timeline traces and hardware performance counters. ☆755 · Updated last week
- A fast communication-overlapping library for tensor parallelism on GPUs. ☆271 · Updated 2 months ago
- Zero Bubble Pipeline Parallelism ☆309 · Updated 2 months ago
- Applied AI experiments and examples for PyTorch ☆211 · Updated this week
- Ring attention implementation with flash attention ☆645 · Updated 3 weeks ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆496 · Updated this week
- Microsoft Automatic Mixed Precision Library ☆549 · Updated 3 months ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆681 · Updated 2 weeks ago
- This repository contains the experimental PyTorch-native float8 training UX. ☆219 · Updated 5 months ago
- ☆170 · Updated this week
- Fast CUDA matrix multiplication from scratch ☆579 · Updated last year
- Shared Middle-Layer for Triton Compilation ☆220 · Updated this week
- Collection of kernels written in the Triton language ☆90 · Updated 2 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆272 · Updated last month
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,016 · Updated 9 months ago
- Helpful tools and examples for working with flex-attention ☆583 · Updated this week
- ☆154 · Updated 7 months ago
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving ☆481 · Updated 2 months ago
- Step-by-step optimization of CUDA SGEMM ☆270 · Updated 2 years ago
- 🚀 Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆1,669 · Updated this week