tspeterkim / flash-attention-minimal
Flash Attention in ~100 lines of CUDA (forward pass only)
☆681 · Updated 2 weeks ago
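For context on what the repository implements: the FlashAttention forward pass avoids materializing the full N×N attention matrix by streaming over keys/values while maintaining a running maximum and softmax denominator per query row ("online softmax"). The sketch below illustrates that update rule in a deliberately simplified form (single head, one thread per query row, no shared-memory tiling); all names and sizes here are illustrative assumptions, not the repository's actual kernel.

```cuda
// Simplified sketch of the FlashAttention forward pass: one thread per
// query row, with a running max/sum ("online softmax") so the full N x N
// score matrix is never materialized. The real kernel additionally stages
// K/V tiles through shared memory; this version skips that for brevity.
// All names and sizes are illustrative, not the repository's code.
#include <cuda_runtime.h>
#include <float.h>

#define D 64  // head dimension, assumed small enough to keep in registers

// Q, K, V, O are [N, D] row-major single-head matrices.
__global__ void flash_fwd_sketch(const float* Q, const float* K,
                                 const float* V, float* O, int N) {
    int q = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's query row
    if (q >= N) return;

    float m = -FLT_MAX;     // running row maximum of the scores
    float l = 0.0f;         // running softmax denominator
    float acc[D] = {0.0f};  // un-normalized output row
    const float scale = rsqrtf((float)D);

    for (int j = 0; j < N; ++j) {
        // score s = (q_row . k_j) / sqrt(D)
        float s = 0.0f;
        for (int d = 0; d < D; ++d) s += Q[q * D + d] * K[j * D + d];
        s *= scale;

        // Online-softmax update: rescale previous partial results so
        // everything stays relative to the new running maximum.
        float m_new = fmaxf(m, s);
        float corr = __expf(m - m_new);  // correction for old acc and l
        float p = __expf(s - m_new);     // this key's un-normalized weight
        l = l * corr + p;
        for (int d = 0; d < D; ++d)
            acc[d] = acc[d] * corr + p * V[j * D + d];
        m = m_new;
    }
    for (int d = 0; d < D; ++d) O[q * D + d] = acc[d] / l;  // normalize once
}
```

A launch such as `flash_fwd_sketch<<<(N + 127) / 128, 128>>>(Q, K, V, O, N)` would compute O = softmax(QKᵀ/√D)V one row per thread; the repository's ~100-line kernel gets its speed by additionally staging Q/K/V tiles in shared memory for data reuse.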
Alternatives and similar repositories for flash-attention-minimal:
Users interested in flash-attention-minimal are comparing it to the libraries listed below.
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆505 · Updated 2 months ago
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆714 · Updated this week
- Tile primitives for speedy kernels ☆1,923 · Updated this week
- Puzzles for learning Triton ☆1,300 · Updated last month
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆680 · Updated 4 months ago
- FlashInfer: Kernel Library for LLM Serving ☆1,797 · Updated this week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆496 · Updated this week
- An open-source efficient deep learning framework/compiler, written in Python. ☆668 · Updated this week
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving ☆481 · Updated 2 months ago
- Helpful tools and examples for working with flex-attention ☆583 · Updated this week
- GPU programming related news and material links ☆1,312 · Updated last week
- Pipeline Parallelism for PyTorch ☆736 · Updated 4 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆692 · Updated 3 months ago
- Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O ☆211 · Updated this week
- Fast CUDA matrix multiplication from scratch ☆579 · Updated last year
- Applied AI experiments and examples for PyTorch ☆211 · Updated this week
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆644 · Updated this week
- Ring attention implementation with flash attention ☆645 · Updated 3 weeks ago
- Scalable and robust tree-based speculative decoding algorithm ☆329 · Updated 5 months ago
- Microsoft Automatic Mixed Precision Library ☆549 · Updated 3 months ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆244 · Updated 2 weeks ago
- Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference (EMNLP 2024) ☆175 · Updated 9 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆219 · Updated 5 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,316 · Updated 6 months ago
- Serving multiple LoRA-finetuned LLMs as one ☆1,012 · Updated 8 months ago
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference of large language models ☆315 · Updated last month
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆492 · Updated 2 months ago
- Cataloging released Triton kernels. ☆155 · Updated last week