tspeterkim / flash-attention-minimal
Flash Attention in ~100 lines of CUDA (forward pass only)
☆626 · Updated 7 months ago
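For orientation before the list of related projects: the repository implements the forward pass of FlashAttention in roughly 100 lines of CUDA. The kernel below is a hypothetical minimal sketch of that style of kernel (tiled K/V streaming with an online softmax), not the repository's actual code; the kernel name `flash_fwd_kernel`, the tile sizes, and the single-head fp32 setup are illustrative assumptions.

```cuda
// Minimal sketch of a FlashAttention-style forward kernel: single head,
// fp32, no masking or dropout. Kernel name, tile sizes, and launch shape
// are illustrative assumptions, not the repository's code.
#include <cuda_runtime.h>
#include <math.h>

#define TILE_Q 32    // query rows handled per thread block
#define TILE_K 32    // key/value rows staged per iteration
#define HEAD_DIM 64  // head dimension d

__global__ void flash_fwd_kernel(const float* Q, const float* K, const float* V,
                                 float* O, int N, float softmax_scale) {
    int q_idx = blockIdx.x * TILE_Q + threadIdx.x;  // one query row per thread
    bool active = q_idx < N;

    __shared__ float Ks[TILE_K][HEAD_DIM];
    __shared__ float Vs[TILE_K][HEAD_DIM];

    // Per-row running state for the online softmax.
    float q[HEAD_DIM], acc[HEAD_DIM];
    float row_max = -INFINITY, row_sum = 0.0f;
    if (active) {
        for (int d = 0; d < HEAD_DIM; ++d) { q[d] = Q[q_idx * HEAD_DIM + d]; acc[d] = 0.0f; }
    }

    // Stream over K/V tiles so the full N x N score matrix is never materialized.
    for (int kv0 = 0; kv0 < N; kv0 += TILE_K) {
        // Cooperatively stage this K/V tile into shared memory.
        for (int i = threadIdx.x; i < TILE_K; i += blockDim.x) {
            int kv = kv0 + i;
            for (int d = 0; d < HEAD_DIM; ++d) {
                Ks[i][d] = (kv < N) ? K[kv * HEAD_DIM + d] : 0.0f;
                Vs[i][d] = (kv < N) ? V[kv * HEAD_DIM + d] : 0.0f;
            }
        }
        __syncthreads();

        if (active) {
            for (int i = 0; i < TILE_K && kv0 + i < N; ++i) {
                // Score for this (query, key) pair.
                float s = 0.0f;
                for (int d = 0; d < HEAD_DIM; ++d) s += q[d] * Ks[i][d];
                s *= softmax_scale;

                // Online softmax: rescale the running accumulator when the max grows.
                float new_max = fmaxf(row_max, s);
                float correction = expf(row_max - new_max);
                float p = expf(s - new_max);
                row_sum = row_sum * correction + p;
                for (int d = 0; d < HEAD_DIM; ++d)
                    acc[d] = acc[d] * correction + p * Vs[i][d];
                row_max = new_max;
            }
        }
        __syncthreads();
    }

    if (active)
        for (int d = 0; d < HEAD_DIM; ++d) O[q_idx * HEAD_DIM + d] = acc[d] / row_sum;
}
```

Under these assumptions, a launch such as `flash_fwd_kernel<<<(N + TILE_Q - 1) / TILE_Q, TILE_Q>>>(Q, K, V, O, N, 1.0f / sqrtf(HEAD_DIM))` covers one attention head; real kernels (including several projects listed below) layer batching, multiple heads, masking, and mixed precision on top of this core loop.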
Related projects
Alternatives and complementary repositories for flash-attention-minimal
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆483 · Updated 3 weeks ago
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆636 · Updated this week
- Tile primitives for speedy kernels ☆1,658 · Updated this week
- Puzzles for learning Triton ☆1,135 · Updated this week
- Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference (EMNLP 2024) ☆173 · Updated 7 months ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆420 · Updated this week
- Ring attention implementation with flash attention ☆585 · Updated last week
- FlashInfer: Kernel Library for LLM Serving ☆1,452 · Updated this week
- This repository contains the experimental PyTorch native float8 training UX ☆211 · Updated 3 months ago
- Scalable and robust tree-based speculative decoding algorithm ☆315 · Updated 3 months ago
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving ☆443 · Updated last week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆624 · Updated 2 months ago
- Helpful tools and examples for working with flex-attention ☆469 · Updated 3 weeks ago
- ☆152 · Updated this week
- An open-source efficient deep learning framework/compiler, written in Python. ☆652 · Updated last week
- Fast CUDA matrix multiplication from scratch ☆479 · Updated 10 months ago
- GPU programming related news and material links ☆1,237 · Updated last month
- Serving multiple LoRA finetuned LLMs as one ☆984 · Updated 6 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆636 · Updated 2 months ago
- Pipeline Parallelism for PyTorch ☆726 · Updated 2 months ago
- Cataloging released Triton kernels. ☆134 · Updated 2 months ago
- Simple and fast low-bit matmul kernels in CUDA / Triton ☆143 · Updated this week
- Microsoft Automatic Mixed Precision Library ☆525 · Updated last month
- Applied AI experiments and examples for PyTorch ☆166 · Updated 2 weeks ago
- ☆167 · Updated 4 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆165 · Updated this week
- Transformers with Arbitrarily Large Context ☆641 · Updated 3 months ago
- ☆505 · Updated 3 weeks ago
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆1,339 · Updated this week
- ☆289 · Updated 7 months ago