Alternatives and similar repositories for ppl.llm.serving (☆128, updated Dec 24, 2024)

Users interested in ppl.llm.serving are comparing it to the libraries listed below:
- A primitive library for neural networks (☆1,367, updated Nov 24, 2024)
- Common libraries for PPL projects (☆31, updated Mar 10, 2025)
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) (☆84, updated Mar 20, 2023)
- CMake configurations for PPL projects (☆12, updated Aug 10, 2024)
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer (☆95, updated Feb 20, 2026)
- Performance of the C++ interface of FlashAttention and FlashAttention v2 in large language model (LLM) inference scenarios (☆43, updated Feb 27, 2025)
- 🎉 My collection of CUDA kernels (☆10, updated Jun 25, 2024)
- Experiments evaluating preemption on the NVIDIA Pascal architecture (☆17, updated Nov 10, 2016)
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications (☆1,070, updated this week)
- ppl.cv is a high-performance image processing library of openPPL supporting various platforms (☆515, updated Oct 30, 2024)
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… (☆3,945, updated Mar 13, 2026)
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (☆118, updated Mar 13, 2024); a minimal roofline sketch appears after this list
- HunyuanDiT with TensorRT and libtorch (☆17, updated May 22, 2024)
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens (☆1,041, updated Sep 4, 2024); a dequantize-then-GEMM sketch of the storage format follows this list
- LLaMA INT4 CUDA inference with AWQ (☆53, updated Jan 20, 2025)
- 📚 A curated list of awesome LLM/VLM inference papers with code: FlashAttention, PagedAttention, WINT8/4, parallelism, etc. 🎉 (☆5,062, updated this week)
- Qwen2 and Llama3 C++ implementation (☆49, updated Jun 7, 2024)
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … (☆273, updated Aug 6, 2025)
- Standalone FlashAttention v2 kernel without the libtorch dependency (☆112, updated Sep 10, 2024)
- Transformer-related optimization, including BERT and GPT (☆17, updated Jul 29, 2023)
- A throughput-oriented, high-performance serving framework for LLMs (☆949, updated Oct 29, 2025)
- Python package for rematerialization-aware gradient checkpointing (☆27, updated Oct 31, 2023); a minimal PyTorch checkpointing example appears after this list
- Simple dynamic batching inference (☆145, updated Mar 8, 2022); a toy batching loop is sketched after this list
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… (☆818, updated Mar 6, 2025)
- A demo of how to write a high-performance convolution that runs on Apple silicon (☆57, updated Feb 8, 2022)
- PPL Quantization Tool (PPQ) is a powerful offline neural network quantization tool (☆1,787, updated Mar 28, 2024); a generic INT8 round-trip example follows this list
- An easy-to-understand TensorOp matmul tutorial (☆409, updated Mar 5, 2026)
- Perplexity GPU Kernels (☆566, updated Nov 7, 2025)
- FlashInfer: kernel library for LLM serving (☆5,145, updated this week); a plain-PyTorch attention reference is sketched after this list
- Fast Hadamard transform in CUDA, with a PyTorch interface (☆293, updated Mar 10, 2026); a NumPy FWHT reference closes this page
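
For the roofline-model entry above: a minimal sketch of the model itself. The peak FLOP/s and bandwidth numbers below are illustrative assumptions, not measurements from that repository; attainable throughput is min(peak compute, arithmetic intensity × memory bandwidth).

```python
# Minimal roofline-model sketch for LLM inference (illustrative numbers only).
# Attainable FLOP/s = min(peak_flops, arithmetic_intensity * memory_bandwidth).

def roofline(peak_flops: float, mem_bw: float, arithmetic_intensity: float) -> float:
    """Attainable FLOP/s for a kernel with the given FLOPs-per-byte ratio."""
    return min(peak_flops, arithmetic_intensity * mem_bw)

# Hypothetical accelerator: 312 TFLOP/s fp16 peak, 2 TB/s HBM bandwidth.
PEAK = 312e12
BW = 2e12

# Single-token decode is roughly a GEMV over fp16 weights:
# ~2 FLOPs per weight, 2 bytes per weight -> arithmetic intensity ~1 FLOP/byte.
decode_ai = 1.0
# Prefill over a long prompt behaves like a large GEMM with high data reuse.
prefill_ai = 300.0

print(f"decode:  {roofline(PEAK, BW, decode_ai) / 1e12:.1f} TFLOP/s (memory-bound)")
print(f"prefill: {roofline(PEAK, BW, prefill_ai) / 1e12:.1f} TFLOP/s (compute-bound)")
```

This is why weight-only quantization helps decode so much: shrinking bytes per weight raises the arithmetic intensity of the memory-bound GEMV.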
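
For the FP16xINT4 kernel entries (the Marlin-style kernel and the FasterTransformer GEMM extraction): real kernels fuse dequantization into the GEMM on-chip. The NumPy sketch below only illustrates the storage format such kernels exploit, two 4-bit weights per byte plus a per-column fp16 scale; the packing layout here is an assumption for illustration, not either kernel's actual layout.

```python
import numpy as np

def pack_int4(w_q: np.ndarray) -> np.ndarray:
    """Pack unsigned 4-bit values (0..15) two per byte, low nibble first."""
    assert w_q.shape[0] % 2 == 0
    return (w_q[0::2] | (w_q[1::2] << 4)).astype(np.uint8)

def unpack_int4(packed: np.ndarray) -> np.ndarray:
    """Inverse of pack_int4."""
    out = np.empty(packed.shape[0] * 2, dtype=np.uint8)
    out[0::2], out[1::2] = packed & 0xF, packed >> 4
    return out

# Quantize a [K, N] fp16 weight to 4 bits, per-column scale, zero-point 8.
rng = np.random.default_rng(0)
K, N = 64, 32
w = rng.standard_normal((K, N)).astype(np.float16)
scale = (np.abs(w).max(axis=0) / 7).astype(np.float16)           # [N]
w_q = np.clip(np.round(w / scale) + 8, 0, 15).astype(np.uint8)   # [K, N]
packed = np.stack([pack_int4(w_q[:, n]) for n in range(N)], axis=1)  # [K//2, N]

# "Kernel": dequantize then GEMM against an fp16 activation (fused on the GPU).
x = rng.standard_normal((4, K)).astype(np.float16)
w_deq = np.stack([(unpack_int4(packed[:, n]).astype(np.float32) - 8) * scale[n]
                  for n in range(N)], axis=1)                    # [K, N]
y = x.astype(np.float32) @ w_deq
ref = x.astype(np.float32) @ w.astype(np.float32)
print("max abs error vs fp16 reference:", np.abs(y - ref).max())
```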
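
For the rematerialization-aware checkpointing package: its own API is not shown here; below is only the stock PyTorch checkpointing primitive that the technique builds on, trading activation memory for recomputation in the backward pass.

```python
import torch
from torch.utils.checkpoint import checkpoint

# A block whose intermediate activations we would rather recompute than store.
block = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
)

x = torch.randn(8, 1024, requires_grad=True)

# Forward runs normally but discards intermediates; backward reruns `block`
# to rematerialize them. use_reentrant=False is the recommended mode.
y = checkpoint(block, x, use_reentrant=False)
y.sum().backward()
print(x.grad.shape)  # torch.Size([8, 1024])
```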
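
For the dynamic-batching entry: a toy sketch of the idea, where requests arriving within a short window are grouped into one forward pass. The queue discipline and the `run_model` stub are illustrative assumptions, not that repository's design.

```python
import queue
import threading
import time

requests = queue.Queue()  # items are (prompt, reply_queue) tuples

def run_model(batch):
    # Stub standing in for one batched forward pass over all prompts.
    return [f"echo:{prompt}" for prompt in batch]

def batching_loop(max_batch=8, window_s=0.01):
    while True:
        batch = [requests.get()]        # block until at least one request
        deadline = time.monotonic() + window_s
        # Absorb whatever else arrives before the deadline, up to max_batch.
        while len(batch) < max_batch:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(requests.get(timeout=remaining))
            except queue.Empty:
                break
        outputs = run_model([prompt for prompt, _ in batch])
        for (_, reply_q), out in zip(batch, outputs):
            reply_q.put(out)

threading.Thread(target=batching_loop, daemon=True).start()

reply = queue.Queue()
requests.put(("hello", reply))
print(reply.get())  # echo:hello
```

Continuous-batching engines such as LightLLM and vLLM-style servers refine this idea by admitting and retiring requests between individual decode steps rather than per window.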
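
For PPQ: offline quantization tools ultimately emit scale/zero-point parameters like those below. This is only the generic symmetric per-tensor INT8 round trip, not PPQ's actual calibration pipeline.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: returns (q, scale)."""
    scale = float(np.abs(x).max()) / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

x = np.random.default_rng(0).standard_normal(1000).astype(np.float32)
q, s = quantize_int8(x)
err = np.abs(dequantize(q, s) - x).max()
print(f"scale={s:.5f}, max round-trip error={err:.5f}")  # error <= scale / 2
```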
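
For the attention-kernel entries (FlashInfer, the standalone FlashAttention v2 kernel, and the C++ benchmark above): those projects expose their own APIs, which differ; the snippet below is only the stock PyTorch reference for the computation they all accelerate, and it dispatches to a fused kernel where one is available.

```python
import torch
import torch.nn.functional as F

# Reference for what fused attention kernels compute:
# softmax(Q K^T / sqrt(d)) V, here with a causal mask as used in decoding.
batch, heads, seq, dim = 1, 8, 128, 64
q = torch.randn(batch, heads, seq, dim)
k = torch.randn(batch, heads, seq, dim)
v = torch.randn(batch, heads, seq, dim)

out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 8, 128, 64])
```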
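
For the fast Hadamard transform repository: a plain NumPy reference of the O(n log n) fast Walsh-Hadamard transform, useful for checking a CUDA kernel's semantics. This is the textbook butterfly recursion, not that repo's implementation.

```python
import numpy as np

def fwht(x: np.ndarray) -> np.ndarray:
    """Unnormalized fast Walsh-Hadamard transform; len(x) must be a power of 2."""
    x = x.astype(np.float64).copy()
    n = x.shape[0]
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        # Butterfly: combine pairs (j, j+h) within each block of size 2h.
        for start in range(0, n, 2 * h):
            a = x[start:start + h].copy()
            b = x[start + h:start + 2 * h].copy()
            x[start:start + h] = a + b
            x[start + h:start + 2 * h] = a - b
        h *= 2
    return x

x = np.arange(8, dtype=np.float64)
y = fwht(x)
# The unnormalized transform is self-inverse up to a factor of n.
print(np.allclose(fwht(y), len(x) * x))  # True
```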