lix19937 / llm-deploy
AI infra: LLM inference / TensorRT-LLM / vLLM
☆20 · Updated 6 months ago
Alternatives and similar repositories for llm-deploy
Users interested in llm-deploy are comparing it to the libraries listed below.
- ☆148 · Updated 5 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆38 · Updated 4 months ago
- ☆139 · Updated last year
- A tutorial for CUDA & PyTorch. ☆146 · Updated 5 months ago
- ☢️ TensorRT 2023 second-round entry: Llama model inference acceleration optimization based on TensorRT-LLM. ☆48 · Updated last year
- ☆58 · Updated 7 months ago
- ☆128 · Updated 6 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆100 · Updated last year
- A llama model inference framework implemented in CUDA C++. ☆57 · Updated 7 months ago
- ☆135 · Updated last year
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆130 · Updated last year
- ☆69 · Updated last week
- ☆97 · Updated 2 months ago
- FP8 flash attention implemented with the cutlass library on the Ada architecture. ☆71 · Updated 10 months ago
- ☆80 · Updated last month
- ☆96 · Updated 9 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak⚡️ performance. ☆80 · Updated last month
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆128 · Updated 2 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer. ☆92 · Updated 3 weeks ago
- ☆36 · Updated 8 months ago
- Transformer-related optimization, including BERT and GPT. ☆59 · Updated last year
- LLM theoretical performance analysis tools supporting parameter, FLOPs, memory, and latency analysis. ☆96 · Updated last week
- An easy-to-use package for implementing SmoothQuant for LLMs. ☆102 · Updated 2 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference. ☆38 · Updated 2 weeks ago
- Serving inside PyTorch. ☆160 · Updated 2 weeks ago
- A simplified flash-attention implementation using cutlass, intended as a teaching example. ☆42 · Updated 10 months ago
- ☆86 · Updated 3 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency. ☆110 · Updated 9 months ago
- Code and notes for the six major CUDA parallel computing patterns. ☆61 · Updated 4 years ago
- A lightweight llama-like LLM inference framework based on Triton kernels. ☆128 · Updated last week