DD-DuDa / TensorRT-in-Action
TensorRT-in-Action is a GitHub repository that provides code examples for using TensorRT, with accompanying Jupyter Notebooks.
☆15 · Updated 2 years ago
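For context, the notebooks cover the standard TensorRT build-then-run workflow. Below is a minimal sketch of that workflow, not code from the repository itself; it assumes the TensorRT 8.x Python bindings and a hypothetical ONNX model file `model.onnx`.

```python
# Minimal TensorRT build sketch (assumes TensorRT 8.x; "model.onnx" is a
# placeholder, not a file from the TensorRT-in-Action repo).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch networks are required by the ONNX parser.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
# Cap the builder's scratch memory (this API is available in TensorRT >= 8.4).
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)

# Serialize the optimized engine; deserialize it later with trt.Runtime to
# create an execution context and run inference.
engine_bytes = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(engine_bytes)
```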
Alternatives and similar repositories for TensorRT-in-Action
Users interested in TensorRT-in-Action are comparing it to the libraries listed below.
- ☆14 · Updated 11 months ago
- FP8 flash attention implemented with the cutlass library on the Ada architecture ☆74 · Updated 11 months ago
- Tutorials on extending and importing TVM as a CMake include dependency ☆14 · Updated 9 months ago
- ☆26 · Updated last year
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆10 · Updated last year
- ☆37 · Updated last year
- A Llama model inference framework implemented in CUDA C++ ☆58 · Updated 8 months ago
- Multiple GEMM operators constructed with cutlass to support LLM inference ☆18 · Updated this week
- ☆11 · Updated 5 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆39 · Updated 5 months ago
- Awesome code, projects, books, etc. related to CUDA ☆21 · Updated 3 weeks ago
- ☆137 · Updated last year
- ☢️ TensorRT 2023 competition, second round: inference acceleration and optimization for the Llama model based on TensorRT-LLM ☆50 · Updated last year
- CUDA 8-bit Tensor Core matrix multiplication based on the m16n16k16 WMMA API ☆31 · Updated last year
- ☆21 · Updated 4 years ago
- A simplified flash-attention implementation using cutlass, intended to be educational ☆45 · Updated 11 months ago
- Optimized softmax in Triton for many cases ☆21 · Updated 11 months ago
- ☆17 · Updated last year
- A tutorial for CUDA & PyTorch ☆150 · Updated 6 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆94 · Updated 3 weeks ago
- 🎉 My collection of CUDA kernels ☆11 · Updated last year
- A tool for model sparsification based on torch.fx ☆13 · Updated last year
- ☆139 · Updated last year
- Implement Flash Attention using Cute ☆92 · Updated 7 months ago
- Code and notes for the six major CUDA parallel computing patterns ☆60 · Updated 5 years ago
- ☆59 · Updated 8 months ago
- ☆67 · Updated 7 months ago
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores ☆63 · Updated 10 months ago
- Theoretical performance analysis tools for LLMs, supporting parameter, FLOPs, memory, and latency analysis ☆101 · Updated 3 weeks ago
- Basic quantization methods, including QAT, PTQ, per_channel, per_tensor, DoReFa, LSQ, AdaRound, OMSE, histogram, bias_correction, etc. (see the sketch after this list) ☆47 · Updated 2 years ago
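To make the last entry concrete, here is a minimal sketch (not taken from that repository) contrasting per_tensor and per_channel symmetric int8 post-training quantization in PyTorch; the function name and tensor shapes are illustrative assumptions.

```python
# Hypothetical sketch: symmetric int8 PTQ, per-tensor vs. per-channel.
import torch

def quantize_symmetric(w: torch.Tensor, per_channel: bool = False):
    """Return int8 weights and the fp32 scale(s) needed to dequantize."""
    if per_channel:
        # One scale per output channel (dim 0): finer-grained, lower error.
        max_abs = w.abs().amax(dim=1, keepdim=True)
    else:
        # One scale for the whole tensor: simplest, coarsest.
        max_abs = w.abs().max()
    scale = max_abs.clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(w / scale), -128, 127).to(torch.int8)
    return q, scale

w = torch.randn(64, 128)
q_t, s_t = quantize_symmetric(w, per_channel=False)
q_c, s_c = quantize_symmetric(w, per_channel=True)
# Per-channel scales typically reconstruct the weights more accurately.
print((w - q_t.float() * s_t).abs().mean().item())
print((w - q_c.float() * s_c).abs().mean().item())
```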