lix19937 / llm-deploy
AI infra: LLM inference with TensorRT-LLM / vLLM
☆18 · Updated 3 months ago
Alternatives and similar repositories for llm-deploy:
Users interested in llm-deploy are comparing it to the repositories listed below.
- A llama model inference framework implemented in CUDA C++ ☆48 · Updated 4 months ago
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆129 · Updated last year
- A layered, decoupled deep learning inference engine ☆72 · Updated last month
- ☆113 · Updated last year
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆35 · Updated 3 weeks ago
- ☆145 · Updated 2 months ago
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆106 · Updated 6 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference. ☆35 · Updated 2 weeks ago
- A lightweight llama-like LLM inference framework based on Triton kernels. ☆100 · Updated 2 weeks ago
- ☆74 · Updated 3 months ago
- A collection of blogs on AI development ☆19 · Updated 4 months ago
- Hands-on large model deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆26 · Updated last year
- ☆46 · Updated 2 months ago
- ☆36 · Updated 5 months ago
- ☆45 · Updated this week
- Code and notes for the six major CUDA parallel computing patterns ☆60 · Updated 4 years ago
- ☆87 · Updated last year
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆93 · Updated last year
- A tutorial for CUDA & PyTorch ☆130 · Updated 2 months ago
- 📚 FFPA (Split-D): Yet another faster flash prefill attention with O(1) GPU SRAM complexity for headdim > 256, ~2x↑ 🎉 vs SDPA EA. ☆154 · Updated this week
- ☆127 · Updated 3 months ago
- ☆58 · Updated 4 months ago
- ☆139 · Updated 11 months ago
- ☆40 · Updated this week
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer ☆90 · Updated last month
- ☢️ TensorRT 2023 competition, second round: Llama model inference acceleration based on TensorRT-LLM ☆46 · Updated last year
- FP8 flash attention implemented with the CUTLASS library on the Ada architecture ☆60 · Updated 7 months ago
- Transformer-related optimizations, including BERT and GPT ☆59 · Updated last year
- Examples of CUDA implementations using Cutlass CuTe ☆146 · Updated last month
- LLM theoretical performance analysis tools supporting params, FLOPs, memory, and latency analysis. ☆80 · Updated 2 months ago
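Several of the repositories above estimate LLM inference performance analytically (roofline-model comparison across hardware; params/FLOPs/memory/latency analysis). As a rough illustration of what such tools compute, here is a minimal sketch of a roofline-style decode-stage estimate. All formulas are standard back-of-the-envelope approximations, and the model and hardware numbers are illustrative assumptions, not taken from any listed repository.

```python
# Back-of-the-envelope roofline estimate for the decode stage of a
# decoder-only transformer. Ignores KV-cache traffic, norms, and biases.

def decode_estimate(n_layers, d_model, n_vocab, dtype_bytes=2):
    """Rough per-token weight count, FLOPs, and weight bytes read
    for one decode step (batch size 1)."""
    # Per layer: attention projections (4 * d^2) + MLP (8 * d^2),
    # plus the embedding/unembedding matrix.
    params = n_layers * 12 * d_model ** 2 + n_vocab * d_model
    flops = 2 * params               # one multiply-add per weight per token
    weight_bytes = params * dtype_bytes
    return params, flops, weight_bytes

def latency_bound(flops, bytes_moved, peak_flops, peak_bw):
    """Roofline: per-token latency is bounded by the slower of
    compute time and memory-traffic time."""
    compute_s = flops / peak_flops
    memory_s = bytes_moved / peak_bw
    bound = "memory" if memory_s > compute_s else "compute"
    return max(compute_s, memory_s), bound

# Example: a 7B-class model (32 layers, d_model=4096, 32k vocab) on a
# hypothetical GPU with 312 TFLOP/s fp16 and 2 TB/s HBM bandwidth.
params, flops, wbytes = decode_estimate(32, 4096, 32000)
t, bound = latency_bound(flops, wbytes, 312e12, 2e12)
print(f"params ≈ {params / 1e9:.1f}B, decode is {bound}-bound, "
      f"≥ {t * 1e3:.2f} ms/token")
```

With these numbers the decode step is memory-bound, which is why the decoding-attention and weight-quantization kernels listed above focus on reducing bytes moved rather than raw FLOPs.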