dlsyscourse / lecture13
☆9 · Updated 3 months ago
Alternatives and similar repositories for lecture13:
Users interested in lecture13 are comparing it to the libraries listed below.
- Standalone Flash Attention v2 kernel without libtorch dependency ☆99 · Updated 4 months ago
- ☆9 · Updated last year
- GPTQ inference TVM kernel ☆38 · Updated 8 months ago
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆21 · Updated 3 weeks ago
- Performance benchmarking with ColossalAI ☆39 · Updated 2 years ago
- ☆48 · Updated 10 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆87 · Updated 10 months ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆63 · Updated 2 years ago
- Distributed DataLoader for PyTorch based on Ray ☆24 · Updated 3 years ago
- Odysseus: Playground of LLM Sequence Parallelism ☆64 · Updated 7 months ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated 3 months ago
- pytorch-profiler ☆50 · Updated last year
- ☆26 · Updated 3 years ago
- Codebase associated with the PyTorch compiler tutorial ☆44 · Updated 5 years ago
- Performance of the C++ interface of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios ☆34 · Updated 4 months ago
- Penn CIS 5650 (GPU Programming and Architecture) Final Project ☆26 · Updated last year
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆102 · Updated last month
- A study of CUTLASS ☆19 · Updated 2 months ago
- Benchmark code for the "Online normalizer calculation for softmax" paper (a minimal sketch of the algorithm follows this list) ☆62 · Updated 6 years ago
- LLaMA INT4 CUDA inference with AWQ ☆49 · Updated 6 months ago
- An implementation of Flash Attention using CuTe ☆65 · Updated last month
- TiledCUDA is a highly efficient kernel template library designed to elevate CUDA C's level of abstraction for processing tiles ☆174 · Updated 2 months ago
- ☆78 · Updated 4 months ago
- A "gym"-style toolkit for building lightweight NAS systems ☆13 · Updated 2 years ago
- High-speed GEMV kernels, up to 2.7x speedup compared to the PyTorch baseline ☆93 · Updated 6 months ago
- Tutorials for writing high-performance GPU operators in AI frameworks ☆126 · Updated last year
- Decoding Attention is specially optimized for multi-head attention (MHA) using CUDA cores for the decoding stage of LLM inference ☆27 · Updated 2 months ago
- A demo of how to write a high-performance convolution that runs on Apple silicon ☆52 · Updated 2 years ago
- Inference framework for MoE layers based on TensorRT with Python bindings ☆41 · Updated 3 years ago
- ☆22 · Updated 5 years ago
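The "Online normalizer calculation for softmax" entry above refers to the single-pass algorithm of Milakov and Gimelshein, which maintains a running maximum and a running normalizer together, rescaling the accumulated sum whenever a new maximum appears, so softmax needs only one read of the input. A minimal NumPy sketch of that recurrence (variable names are illustrative, not taken from the benchmark repo):

```python
import numpy as np

def online_softmax(x):
    """Single-pass softmax via the online normalizer recurrence."""
    m = -np.inf  # running maximum seen so far
    d = 0.0      # running normalizer sum, relative to the current maximum
    for xi in x:
        m_new = max(m, xi)
        # Rescale the old sum to the new maximum, then add the new term.
        d = d * np.exp(m - m_new) + np.exp(xi - m_new)
        m = m_new
    return np.exp(np.asarray(x) - m) / d

# Sanity check against the standard two-pass, max-subtracted softmax.
x = np.array([1.0, 3.0, 2.0])
ref = np.exp(x - x.max()) / np.exp(x - x.max()).sum()
assert np.allclose(online_softmax(x), ref)
```

The final division still rereads the input, but the max and the normalizer are computed in one fused pass instead of two, which is the property the paper benchmarks.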