ppl-ai / pplx-kernels
Perplexity GPU Kernels
☆134 · Updated this week
Alternatives and similar repositories for pplx-kernels:
Users interested in pplx-kernels are comparing it to the libraries listed below:
- Extensible collectives library in Triton ☆84 · Updated this week
- ☆76 · Updated 4 months ago
- DeeperGEMM: crazy optimized version ☆64 · Updated this week
- ☆60 · Updated 3 months ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆103 · Updated 8 months ago
- ☆91 · Updated 6 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆108 · Updated 6 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆77 · Updated 5 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆74 · Updated this week
- Applied AI experiments and examples for PyTorch ☆251 · Updated 2 weeks ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆180 · Updated 2 months ago
- ☆94 · Updated 3 weeks ago
- Fast low-bit matmul kernels in Triton ☆275 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆107 · Updated this week
- ☆193 · Updated 8 months ago
- Cataloging released Triton kernels. ☆213 · Updated 2 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆243 · Updated 5 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆72 · Updated 7 months ago
- Fastest kernels written from scratch ☆205 · Updated 3 weeks ago
- ☆76 · Updated last week
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆236 · Updated last month
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆333 · Updated last week
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆79 · Updated 4 months ago
- High performance Transformer implementation in C++. ☆113 · Updated 2 months ago
- ☆192 · Updated last week
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆65 · Updated this week
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆203 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training ☆103 · Updated last month
- Microsoft Collective Communication Library ☆64 · Updated 4 months ago
- ☆103 · Updated 7 months ago