tile-ai / tvm
Open deep learning compiler stack for CPU, GPU and specialized accelerators
☆18 · Updated this week
Alternatives and similar repositories for tvm:
Users interested in tvm are comparing it to the libraries listed below.
- TensorRT LLM Benchmark Configuration ☆13 · Updated 7 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA using CUDA cores for the decoding stage of LLM inference. ☆35 · Updated 2 weeks ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. By pro… ☆68 · Updated this week
- GPTQ inference TVM kernel ☆39 · Updated 10 months ago
- Benchmark tests supporting the TiledCUDA library. ☆15 · Updated 4 months ago
- Quantized Attention on GPU ☆45 · Updated 4 months ago
- Multiple GEMM operators are constructed with cutlass to support LLM inference. ☆17 · Updated 5 months ago
- ☆19 · Updated 5 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆63 · Updated this week
- A llama model inference framework implemented in CUDA C++ ☆48 · Updated 4 months ago
- ☆24 · Updated 3 months ago
- ☆26 · Updated this week
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA and CuTe APIs, achieving peak performance (a minimal WMMA sketch follows this list). ☆59 · Updated 2 weeks ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆106 · Updated 6 months ago
- ☆46 · Updated 2 months ago
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆23 · Updated last month
- Odysseus: Playground of LLM Sequence Parallelism ☆66 · Updated 9 months ago
- ☆88 · Updated 6 months ago
- An implementation of Flash Attention using CuTe. ☆74 · Updated 3 months ago
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆30 · Updated 3 weeks ago
- ☆61 · Updated 4 months ago
- FP8 flash attention implemented with the cutlass library on the Ada architecture ☆60 · Updated 7 months ago
- FlexAttention w/ FlashAttention3 Support ☆26 · Updated 5 months ago
- ☆29 · Updated 11 months ago
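To make the HGEMM entry above concrete: the technique it names, a tensor-core GEMM written against CUDA's WMMA API, looks roughly like the sketch below. This is not code from any of the listed repositories; the kernel name, launch geometry, and the assumption that M, N and K are multiples of 16 are illustrative choices, and a peak-performance kernel would add shared-memory staging and pipelining on top of this skeleton.

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// Minimal WMMA sketch (not from the repos above): each warp computes one
// 16x16 tile of C = A * B, with half inputs and float accumulators.
// Assumes M, N, K are multiples of 16, A is row-major, B is col-major.
__global__ void wmma_hgemm(const half* A, const half* B, float* C,
                           int M, int N, int K) {
    // One warp per 16x16 output tile; blockDim.x must be a multiple of 32.
    int warpM = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize;
    int warpN = blockIdx.y * blockDim.y + threadIdx.y;
    if (warpM * 16 >= M || warpN * 16 >= N) return;

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;
    wmma::fill_fragment(c_frag, 0.0f);

    // March along K in 16-wide steps, accumulating into the C fragment.
    for (int k = 0; k < K; k += 16) {
        wmma::load_matrix_sync(a_frag, A + warpM * 16 * K + k, K);
        wmma::load_matrix_sync(b_frag, B + warpN * 16 * K + k, K);
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
    }
    wmma::store_matrix_sync(C + warpM * 16 * N + warpN * 16, c_frag, N,
                            wmma::mem_row_major);
}
```

The repositories listed above differ mainly in how far beyond this baseline they go: shared-memory tiling, MMA PTX or CuTe layouts instead of WMMA fragments, and software pipelining are what separate this sketch from the peak-performance kernels they describe.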