BBuf / tensorrt-llm-moe
☆23 · Updated last month
Alternatives and similar repositories for tensorrt-llm-moe:
Users interested in tensorrt-llm-moe are comparing it to the libraries listed below.
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA using CUDA cores for the decoding stage of LLM inference. ☆35 · Updated 2 weeks ago
- ☆10 · Updated 3 weeks ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆106 · Updated 6 months ago
- A llama model inference framework implemented in CUDA C++ ☆48 · Updated 4 months ago
- ☆29 · Updated 11 months ago
- Multiple GEMM operators are constructed with cutlass to support LLM inference. ☆17 · Updated 6 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA and CuTe APIs, achieving peak performance ⚡️ ☆62 · Updated 3 weeks ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆90 · Updated last month
- FP8 flash attention implemented on the Ada architecture using the cutlass library ☆60 · Updated 7 months ago
- ☆40 · Updated last week
- llama INT4 CUDA inference with AWQ ☆53 · Updated 2 months ago
- ☆48 · Updated 2 months ago
- A practical way of learning Swizzle ☆15 · Updated last month
- A CUDA kernel for NHWC GroupNorm for PyTorch ☆18 · Updated 4 months ago
- Open deep learning compiler stack for CPU, GPU and specialized accelerators ☆18 · Updated this week
- ☆88 · Updated 6 months ago
- Quantized Attention on GPU ☆45 · Updated 4 months ago
- Tutorials on extending and importing TVM with a CMake include dependency ☆13 · Updated 5 months ago
- ☆25 · Updated this week
- DeeperGEMM: crazy optimized version ☆61 · Updated last week
- ☆45 · Updated this week
- Study of cutlass ☆21 · Updated 4 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. By pro… ☆70 · Updated this week
- ☆58 · Updated 4 months ago
- ☆36 · Updated 5 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆108 · Updated 2 weeks ago
- GPTQ inference TVM kernel ☆39 · Updated 11 months ago
- Implement Flash Attention using CuTe. ☆74 · Updated 3 months ago
- ☆19 · Updated 6 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆35 · Updated last month