perplexityai / pplx-kernels
Perplexity GPU Kernels
☆449 · Updated 3 weeks ago
Alternatives and similar repositories for pplx-kernels
Users interested in pplx-kernels are comparing it to the libraries listed below.
- Distributed Compiler based on Triton for Parallel Systems · ☆1,074 · Updated this week
- Dynamic Memory Management for Serving LLMs without PagedAttention · ☆407 · Updated 3 months ago
- A low-latency & high-throughput serving engine for LLMs · ☆408 · Updated 3 months ago
- kernels, of the mega variety · ☆481 · Updated 3 months ago
- A Quirky Assortment of CuTe Kernels · ☆435 · Updated this week
- Ultra and Unified CCL · ☆511 · Updated this week
- Zero Bubble Pipeline Parallelism · ☆421 · Updated 3 months ago
- NVIDIA Inference Xfer Library (NIXL) · ☆569 · Updated this week
- Materials for learning SGLang · ☆554 · Updated this week
- Efficient and easy multi-instance LLM serving · ☆475 · Updated 2 weeks ago
- Applied AI experiments and examples for PyTorch · ☆292 · Updated last week
- A lightweight design for computation-communication overlap · ☆160 · Updated last week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… · ☆195 · Updated last week
- ☆111 · Updated 8 months ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆746 · Updated 5 months ago
- Fast low-bit matmul kernels in Triton · ☆356 · Updated last week
- Disaggregated serving system for Large Language Models (LLMs) · ☆675 · Updated 4 months ago
- Fastest kernels written from scratch · ☆318 · Updated 4 months ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving · ☆344 · Updated this week
- MSCCL++: A GPU-driven communication stack for scalable AI applications · ☆404 · Updated this week
- ☆229 · Updated last year
- ☆284 · Updated last week
- A throughput-oriented high-performance serving framework for LLMs · ☆881 · Updated 3 weeks ago
- LLM KV cache compression made easy · ☆596 · Updated this week
- Cataloging released Triton kernels · ☆252 · Updated 7 months ago
- KernelBench: Can LLMs Write GPU Kernels? A benchmark with Torch -> CUDA problems · ☆537 · Updated this week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment · ☆663 · Updated 3 weeks ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance · ☆214 · Updated this week
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… · ☆266 · Updated 5 months ago
- Puzzles for learning Triton, playable with minimal environment configuration · ☆497 · Updated 8 months ago