☆130 · Updated Dec 24, 2024
Alternatives and similar repositories for ppl.llm.serving
Users interested in ppl.llm.serving are comparing it to the libraries listed below.
- ☆60 · Updated Nov 21, 2024
- ☆152 · Updated Jan 9, 2025
- ☆141 · Updated Apr 23, 2024
- ☆38 · Updated Oct 12, 2024
- A primitive library for neural networks ☆1,366 · Updated Nov 24, 2024
- A standalone GEMM kernel for FP16 activation and quantized weight, extracted from FasterTransformer ☆96 · Updated Feb 20, 2026
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆84 · Updated Mar 20, 2023
- Common libraries for PPL projects ☆31 · Updated Mar 10, 2025
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios ☆44 · Updated Feb 27, 2025
- LLaMA INT4 CUDA inference with AWQ ☆54 · Updated Jan 20, 2025
- Experiments evaluating preemption on the NVIDIA Pascal architecture ☆17 · Updated Nov 10, 2016
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications ☆1,051 · Updated this week
- 🎉 My collection of CUDA kernels ☆11 · Updated Jun 25, 2024
- ☆13 · Updated Jan 7, 2025
- HunyuanDiT with TensorRT and libtorch ☆18 · Updated May 22, 2024
- ☆19 · Updated Apr 6, 2024
- Qwen2 and Llama 3 C++ implementation ☆49 · Updated Jun 7, 2024
- Standalone FlashAttention-2 kernel without libtorch dependency ☆114 · Updated Sep 10, 2024
- CMake configurations for PPL projects ☆12 · Updated Aug 10, 2024
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆273 · Updated Aug 6, 2025
- Compare different hardware platforms via the roofline model for LLM inference tasks (a worked roofline sketch follows this list) ☆120 · Updated Mar 13, 2024
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,901 · Updated Feb 20, 2026
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens (the bandwidth arithmetic behind this claim is sketched after the list) ☆1,018 · Updated Sep 4, 2024
- ppl.cv is a high-performance image processing library of openPPL supporting various platforms ☆515 · Updated Oct 30, 2024
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores ☆72 · Updated Sep 8, 2024
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆816 · Updated Mar 6, 2025
- 📚 A curated list of awesome LLM/VLM inference papers with code: FlashAttention, PagedAttention, WINT8/4, parallelism, etc. 🎉 ☆5,022 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆946 · Updated Oct 29, 2025
- A Triton JIT runtime and FFI provider in C++ ☆31 · Updated this week
- Fast Hadamard transform in CUDA, with a PyTorch interface (a plain-Python FWHT sketch follows this list) ☆285 · Updated Oct 19, 2025
- Perplexity GPU Kernels ☆564 · Updated Nov 7, 2025
- ☆71 · Updated Mar 26, 2025
- Playing with GEMM in TVM ☆92 · Updated Jul 22, 2023
- An easy-to-use package for implementing SmoothQuant for LLMs ☆110 · Updated Apr 7, 2025
- An easy-to-understand TensorOp matmul tutorial ☆410 · Updated Feb 11, 2026
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instruct… ☆523 · Updated Sep 8, 2024
- High-performance Transformer implementation in C++ ☆152 · Updated Jan 18, 2025
- Transformer-related optimization, including BERT and GPT ☆17 · Updated Jul 29, 2023
- ☆20 · Updated Sep 28, 2024
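Several entries above are easier to compare with the roofline model in hand (referenced from the roofline-comparison item in the list). The sketch below is a minimal Python illustration of the idea, not code from any listed repository; the peak-compute and bandwidth figures are made-up placeholders.

```python
# Minimal roofline-model sketch (illustrative numbers, not measurements).
# Attainable FLOP/s = min(peak_flops, mem_bandwidth * arithmetic_intensity),
# where arithmetic intensity = FLOPs performed per byte moved from memory.

def roofline(peak_flops: float, mem_bw: float, intensity: float) -> float:
    """Return attainable FLOP/s for a kernel with the given intensity."""
    return min(peak_flops, mem_bw * intensity)

# Hypothetical accelerator: 300 TFLOP/s FP16 peak, 2 TB/s HBM bandwidth.
PEAK, BW = 300e12, 2e12
ridge = PEAK / BW  # intensity (FLOPs/byte) where kernels turn compute-bound

for name, ai in [("decode GEMV (low intensity)", 1.0),
                 ("prefill GEMM (high intensity)", 300.0)]:
    perf = roofline(PEAK, BW, ai)
    bound = "memory-bound" if ai < ridge else "compute-bound"
    print(f"{name}: {perf / 1e12:.1f} TFLOP/s attainable ({bound})")
```

The ridge point PEAK/BW is the arithmetic intensity at which a kernel stops being bandwidth-limited; single-token decode GEMVs sit far to its left, large prefill GEMMs far to its right.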
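The "~4x speedups up to batch sizes of 16-32" claim in the FP16xINT4 entry follows directly from that memory-bound regime: per-token latency is dominated by streaming the weights, so shrinking them 4x cuts the dominant traffic 4x. A back-of-the-envelope sketch, with an assumed layer shape and bandwidth:

```python
# Why weight-only INT4 can approach ~4x over FP16 at small batch sizes:
# decode-time matrix-vector products are memory-bound, so latency scales
# with the bytes of weights streamed per token. Sizes are illustrative.

n, k = 8192, 8192            # hypothetical weight matrix shape
bw = 2e12                    # assumed memory bandwidth, bytes/s

bytes_fp16 = n * k * 2       # 2 bytes per FP16 weight
bytes_int4 = n * k // 2      # 0.5 bytes per INT4 weight (ignoring scales)

t_fp16 = bytes_fp16 / bw
t_int4 = bytes_int4 / bw
print(f"FP16 layer: {t_fp16 * 1e6:.1f} us, INT4 layer: {t_int4 * 1e6:.1f} us, "
      f"speedup ~{t_fp16 / t_int4:.1f}x")  # -> ~4.0x in the ideal limit
```

Once the batch grows past a few dozen tokens the product turns compute-bound and dequantization overhead starts to show, which is consistent with the quoted 16-32-token ceiling.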
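For the fast-Hadamard-transform entry, the core trick is the O(n log n) butterfly recursion in place of an O(n²) dense multiply by the Hadamard matrix. This is a plain-Python sketch of the unnormalized transform for clarity; the listed repository implements the same idea as a CUDA kernel with a PyTorch binding.

```python
# Fast Walsh-Hadamard transform (FWHT): log2(n) passes of butterflies
# instead of multiplying by the dense n x n Hadamard matrix.

def fwht(x: list[float]) -> list[float]:
    """In-place unnormalized FWHT; len(x) must be a power of two."""
    n = len(x)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for start in range(0, n, h * 2):
            for i in range(start, start + h):
                a, b = x[i], x[i + h]
                x[i], x[i + h] = a + b, a - b  # butterfly
        h *= 2
    return x

print(fwht([1.0, 0.0, 1.0, 0.0]))  # -> [2.0, 2.0, 0.0, 0.0]
```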