OpenPPL / ppl.llm.serving
☆130 · Updated last year
Alternatives and similar repositories for ppl.llm.serving
Users interested in ppl.llm.serving are comparing it to the libraries listed below.
- ☆141 · Updated last year
- ☆152 · Updated last year
- ☆60 · Updated last year
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆476 · Updated last year
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆44 · Updated 11 months ago
- Transformer-related optimizations, including BERT and GPT ☆59 · Updated 2 years ago
- ☆155 · Updated 10 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (a minimal roofline sketch follows this list). ☆120 · Updated last year
- An easy-to-use package for implementing SmoothQuant for LLMs ☆110 · Updated 9 months ago
- ☆105 · Updated last year
- A collection of memory-efficient attention operators implemented in the Triton language. ☆287 · Updated last year
- PyTorch distributed training acceleration framework ☆55 · Updated 5 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆154 · Updated 5 months ago
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer ☆96 · Updated 4 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆274 · Updated 5 months ago
- ☆96 · Updated 10 months ago
- ☆79 · Updated 2 years ago
- ☆144 · Updated last year
- Transformer-related optimizations, including BERT and GPT ☆39 · Updated 2 years ago
- LLM training technologies developed by kwai ☆70 · Updated last week
- FlagCX is a scalable and adaptive cross-chip communication library. ☆170 · Updated this week
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆298 · Updated 2 weeks ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆113 · Updated last year
- ☆76 · Updated last year
- High Performance LLM Inference Operator Library ☆603 · Updated this week
- ☆523 · Updated last week
- ☆38 · Updated last year
- ☆112 · Updated 8 months ago
- FP8 flash attention implemented with the cutlass repository on the Ada architecture ☆78 · Updated last year
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆248 · Updated last week
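For context on the Roofline Model entry above, here is a minimal sketch (not code from any of the listed repositories) of how a roofline estimate applies to LLM inference: it compares the time to stream the weights from memory against the time to execute the matrix-multiply FLOPs for one decode step, and whichever is larger tells you whether the step is memory- or compute-bound. The function name and the example hardware figures are illustrative assumptions, not vendor specifications.

```python
# Minimal roofline sketch for one LLM decode step (illustrative only).

def roofline_time_per_token(params_b: float, bytes_per_param: float,
                            peak_tflops: float, mem_bw_gbs: float) -> dict:
    """Estimate per-token decode time for a dense transformer.

    params_b:        model size in billions of parameters
    bytes_per_param: 2.0 for fp16 weights, 1.0 for int8, 0.5 for int4
    peak_tflops:     accelerator peak compute in TFLOP/s
    mem_bw_gbs:      accelerator peak memory bandwidth in GB/s
    """
    flops = 2 * params_b * 1e9                      # ~2 FLOPs per weight per token
    bytes_moved = params_b * 1e9 * bytes_per_param  # weights read once per token
    t_compute = flops / (peak_tflops * 1e12)        # compute roof
    t_memory = bytes_moved / (mem_bw_gbs * 1e9)     # bandwidth roof
    return {
        "compute_ms": t_compute * 1e3,
        "memory_ms": t_memory * 1e3,
        "bound": "memory" if t_memory > t_compute else "compute",
    }

if __name__ == "__main__":
    # Example with assumed hardware figures: a 7B fp16 model on a GPU with
    # ~300 TFLOP/s of half-precision compute and ~2000 GB/s of HBM bandwidth.
    print(roofline_time_per_token(7, 2.0, 300, 2000))
```

With these assumed numbers the memory term dominates by a wide margin, which is why single-batch decode is commonly treated as bandwidth-bound and why quantized-weight kernels (such as the W4A8 and fp16-activation/quantized-weight GEMM entries above) target exactly that bottleneck.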